andvaranaut - The dungeon crawler


See the Makefile for instructions on how to build for Linux, macOS, and Windows. Item art by Platino.

https://github.com/glouw/andvaranaut

Related Projects

Dungeon Crawl Reference

  •    Lua

Dungeon Crawl Stone Soup is a free roguelike game of exploration and treasure-hunting. Stone Soup is a continuation of Linley's Dungeon Crawl. It is openly developed and invites participation from the Crawl community. See http://crawl.develz.org!

crawl - Dungeon Crawl: Stone Soup official repository

  •    C++

Dungeon Crawl Stone Soup is a game of dungeon exploration, combat and magic, involving characters of diverse skills, worshipping deities of great power and caprice. To win, you'll need to be a master of tactics and strategy, and prevail against overwhelming odds. There is also an ingame list of frequently asked questions which you can access by typing ?Q.

crawler - An easy to use, powerful crawler implemented in PHP. Can execute JavaScript.

  •    PHP

This package provides a class to crawl links on a website. Under the hood, Guzzle promises are used to crawl multiple URLs concurrently. Because the crawler can execute JavaScript, it can crawl JavaScript-rendered sites; Chrome and Puppeteer are used to power this feature.
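The concurrent-crawling pattern described above is language-agnostic; as a rough sketch (in Python rather than PHP, and not the package's actual Guzzle-based API), fetching several URLs at once might look like:

```python
# Minimal sketch of concurrent URL fetching with a thread pool.
# Illustrates the general pattern only, not the PHP package above.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def fetch(url: str) -> tuple[str, int]:
    """Fetch one URL; return (url, HTTP status), or (url, -1) on any network error."""
    try:
        with urlopen(url, timeout=10) as resp:
            return url, resp.status
    except OSError:  # URLError subclasses OSError
        return url, -1

def crawl(urls: list[str], workers: int = 5) -> dict[str, int]:
    """Fetch all URLs concurrently and map each to its status code."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(fetch, urls))
```

A real crawler would additionally parse each response for new links and respect politeness rules such as robots.txt.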

yacy_grid_crawler - Crawler Microservice for the YaCy Grid

  •    Java

The Crawler is a microservice which can be deployed, e.g., using Docker. When the Crawler component is started, it searches for an MCP and connects to it. By default the local host is searched for an MCP, but you can configure one yourself. Every loader and parser microservice must read this crawl profile information. Because that information is required many times, we avoid a request into the crawler index by adding the crawler profile to each contract of a crawl job in the crawler_pending and loader_pending queues.

fetchbot - A simple and flexible web crawler that follows the robots.txt policies and crawl delays.

  •    Go

Package fetchbot provides a simple and flexible web crawler that follows the robots.txt policies and crawl delays. The package has a single external dependency, robotstxt. It also integrates code from the iq package.
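The two courtesy checks fetchbot performs, robots.txt permissions and crawl delays, also exist in Python's standard library; this generic sketch (not fetchbot's Go API) shows both in action:

```python
# Sketch of robots.txt handling with the standard library.
# RobotFileParser normally fetches the file itself via read();
# here we parse a literal string to keep the example offline.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
Crawl-delay: 2
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("mybot", "https://example.com/public/page"))   # True
print(rp.can_fetch("mybot", "https://example.com/private/page"))  # False
print(rp.crawl_delay("mybot"))                                    # 2
```

A polite crawler sleeps for the reported crawl delay between requests to the same host.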


grab-site - The archivist's web crawler: WARC output, dashboard for all crawls, dynamic ignore patterns

  •    Python

grab-site is an easy preconfigured web crawler designed for backing up websites. Give grab-site a URL and it will recursively crawl the site and write WARC files. Internally, grab-site uses wpull for crawling. It provides a dashboard with all of your crawls, showing which URLs are being grabbed, how many URLs are left in the queue, and more.
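WARC itself is a simple plain-text envelope format; a heavily simplified sketch of one record (not grab-site's or wpull's actual writer, which adds record IDs, digests, and dates) looks like:

```python
# Minimal sketch of a WARC 1.0 record: a header block, a blank
# line, the payload, and two trailing CRLF pairs. Real WARC files
# carry more header fields (WARC-Record-ID, WARC-Date, digests).
def warc_record(record_type: str, target_uri: str, payload: bytes) -> bytes:
    headers = (
        f"WARC/1.0\r\n"
        f"WARC-Type: {record_type}\r\n"
        f"WARC-Target-URI: {target_uri}\r\n"
        f"Content-Length: {len(payload)}\r\n"
        f"\r\n"
    )
    return headers.encode("utf-8") + payload + b"\r\n\r\n"

record = warc_record("response", "http://example.com/", b"<html>hi</html>")
```

Records are simply concatenated (and usually gzip-compressed per record) to form a `.warc.gz` file.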

Acies


Acies is a dungeon crawler game written in C# with XNA.

frontera - A scalable frontier for web crawlers

  •    Python

Frontera is a web crawling framework consisting of a crawl frontier and distribution/scaling primitives, allowing you to build a large-scale online web crawler. Frontera takes care of the logic and policies to follow during the crawl. It stores and prioritises links extracted by the crawler to decide which pages to visit next, and is capable of doing so in a distributed manner.

Banished


A dungeon crawler with a storyline.

text dungeon


A small dungeon crawler in the Windows console using ASCII characters.

WebCrawler and Entity Extraction using the Fetch and Process framework


Web crawler using the Fetch and Process framework. Yes, it processes robots.txt.

Aperture - Java framework for getting data and metadata

  •    Java

Aperture is a Java framework for extracting and querying full-text content and metadata from various information systems. It can crawl and extract information from file systems, websites, mailboxes, and mail servers. It supports various file formats like Office, PDF, ZIP, and many more. Metadata is also extracted from image files. Aperture has a strong focus on semantics; extracted metadata can be mapped to predefined properties.

Scrapy - Web crawling & scraping framework for Python

  •    Python

Scrapy is a fast, high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.
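At its core, extracting structured data from a page means parsing HTML and pulling out the pieces you care about; this standard-library sketch (a toy illustration, not Scrapy's selector API) collects the links from a snippet:

```python
# Toy link extraction with the standard library's HTML parser.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href attributes from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

extractor = LinkExtractor()
extractor.feed('<p><a href="/docs">Docs</a> <a href="/blog">Blog</a></p>')
print(extractor.links)  # ['/docs', '/blog']
```

Frameworks like Scrapy layer CSS/XPath selectors, request scheduling, and item pipelines on top of this basic step.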

dhtcrawler2 - dhtcrawler is a DHT crawler written in Erlang

  •    Shell

dhtcrawler is a DHT crawler written in Erlang. It can join a DHT network and crawl many P2P torrents. The program saves all torrent info into a database and provides an HTTP interface to search for a torrent by keyword. dhtcrawler2 is an extended version of dhtcrawler; it has much improved crawling speed and is much more stable.

Photon - Incredibly fast crawler designed for recon.

  •    Python

The extracted information is saved in an organized manner or can be exported as JSON. Control timeout, delay, add seeds, exclude URLs matching a regex pattern, and other cool stuff. The extensive range of options provided by Photon lets you crawl the web exactly the way you want.
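The regex-based URL exclusion Photon offers boils down to a simple filter step, sketched here generically (not Photon's actual implementation):

```python
# Toy sketch of regex-based URL exclusion for a crawler.
import re

def filter_urls(urls: list[str], exclude_pattern: str) -> list[str]:
    """Drop URLs matching the exclusion regex; keep the rest."""
    rx = re.compile(exclude_pattern)
    return [u for u in urls if not rx.search(u)]

urls = ["http://site/a", "http://site/logout", "http://site/b?id=1"]
print(filter_urls(urls, r"logout"))  # ['http://site/a', 'http://site/b?id=1']
```

A crawler applies a filter like this before enqueueing newly discovered links, so excluded pages are never fetched at all.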

Pit of Despair


An XNA 4.0 game in C# focused on learning to write overhead dungeon crawl games. Inspired by games such as Zelda, Wizardry, and perhaps the original Final Fantasy.

hauberk - A web-based roguelike written in Dart.

  •    Dart

Hauberk is a roguelike, an ASCII-art based procedurally-generated dungeon crawl game. It's written in Dart and runs in your browser. To get it up and running locally, you'll need to have the Dart SDK installed. I use the latest dev channel release of Dart, which you can get from here.

Mike's Adventure Game - Roguelike

  •    C

Mike's Adventure Game is a mature 1980s roguelike RPG like NetHack, Rogue, Moria, Angband, Dungeon Crawl, Hack, and ADOM. It's a Win32 port with no gameplay changes. Not associated with original author Mike Teixeira (help from him would be appreciated).

Tiles for Roguelike games

  •    

Graphical tiles for Roguelike Games (NetHack, Dungeon Crawl, etc.)

Secret of Java

  •    Java

A Java game that was developed for a class project. The original intention was to make it similar to Secret of Mana, but it became more of a dungeon crawler. (8/15/09) Development slowed over the summer; we should be resuming development shortly.