
apify-js - Apify SDK: The ultimate web scraping and automation library for JavaScript / Node

  •    JavaScript

The package provides helper functions to launch web browsers with proxies, access the storage, etc. Note that using the package is optional; you can create acts on the Apify platform without it. If you deploy your code to the Apify platform, you can set up a scheduler or execute your code via the web API.

bancocentralbrasil - :brazil: Information on official daily rates for Inflation, Selic, Poupança, Dollar, and Euro from the Banco Central do Brasil website

  •    Python

:brazil: Information on official daily rates for Inflation, Selic, Poupança, Dollar, and Euro from the Banco Central do Brasil website.

Seen - A lightweight crawling/spider framework for everyone (supports JavaScript!) :sparkles:

  •    Python

Seen is a lightweight web crawling framework for everyone, written with asyncio and aiohttp/requests. It is useful for writing a web crawler quickly while getting full JavaScript support.

scrapy-training - Scrapy Training companion code

  •    Python

This repository contains the companion files for the "Crawling the Web with Scrapy" training program. You can either clone it using git or download the files from this zip file. Contact us here if you (or your company) are interested in Scrapy training and coaching sessions.

spidyquotes - Example site for web scraping tutorials

  •    Python

This is an example site for web scraping tutorials. To run in development mode, use the --debug flag.

krawler - A web crawling framework written in Kotlin

  •    Kotlin

Krawler is a web crawling framework written in Kotlin. It is heavily inspired by crawler4j by Yasser Ganjisaffar. The project is still very new, and those looking for a mature, well-tested crawler framework should likely still use crawler4j. For those who can tolerate a bit of turbulence, Krawler should serve as a replacement for crawler4j with minimal modifications to existing applications. Using the Krawler framework is fairly simple. At a minimum, two methods must be overridden in order to use the framework: the shouldVisit method dictates what the crawler should visit, and the visit method dictates what happens once a page is visited. Overriding these two methods is sufficient for creating your own crawler; however, additional methods can be overridden to provide more robust behavior.
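The override pattern described above can be sketched as follows. This is a hypothetical, self-contained stand-in, not Krawler's actual API: only the method names shouldVisit and visit come from the description, and the base class, signatures, and crawl loop here are assumptions for illustration.

```kotlin
// Hypothetical sketch of the crawler pattern described above. Only the
// names `shouldVisit` and `visit` come from the Krawler description;
// everything else (base class, signatures, crawl loop) is assumed.
abstract class SimpleCrawler {
    // Decide whether a discovered URL should be fetched.
    abstract fun shouldVisit(url: String): Boolean

    // Handle a page once it has been fetched.
    abstract fun visit(url: String, content: String)

    // Toy driver: iterate over pre-fetched pages instead of real HTTP.
    fun crawl(pages: Map<String, String>) {
        for ((url, content) in pages) {
            if (shouldVisit(url)) visit(url, content)
        }
    }
}

// A crawler that only visits pages on one host and records what it saw.
class ExampleCrawler : SimpleCrawler() {
    val visited = mutableListOf<String>()

    override fun shouldVisit(url: String) =
        url.startsWith("https://example.com")

    override fun visit(url: String, content: String) {
        visited.add(url)
    }
}

fun main() {
    val crawler = ExampleCrawler()
    crawler.crawl(mapOf(
        "https://example.com/a" to "<html>A</html>",
        "https://other.org/b" to "<html>B</html>"
    ))
    println(crawler.visited)
}
```

The point of the pattern is the separation of concerns: shouldVisit acts as a URL filter, while visit holds the per-page logic, so a new crawler only needs those two overrides.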
