Aperture - Java framework for extracting data and metadata

Aperture is a Java framework for extracting and querying full-text content and metadata from various information systems. It can crawl and extract information from file systems, websites, mailboxes, and mail servers, and it supports many file formats, including Office documents, PDF, ZIP archives, and more. Metadata is also extracted from image files. Aperture has a strong focus on semantics: extracted metadata can be mapped to predefined properties.
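
Aperture's own API is richer than what fits here (source-specific crawlers, data accessors, and RDF-based metadata containers), but the crawl-and-extract workflow described above can be sketched with the JDK alone. The property names in this sketch are illustrative stand-ins for Aperture's predefined vocabulary, not its actual identifiers.

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.LinkedHashMap;
import java.util.Map;

// JDK-only sketch of the crawl-and-extract pattern that Aperture generalizes:
// walk an information source (here, a directory tree) and, for each item,
// emit metadata mapped onto predefined property names.
public class CrawlSketch {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                    throws IOException {
                Map<String, Object> metadata = new LinkedHashMap<>();
                metadata.put("uri", file.toUri());                      // illustrative property names,
                metadata.put("byteSize", attrs.size());                 // not Aperture's real vocabulary
                metadata.put("lastModified", attrs.lastModifiedTime());
                metadata.put("mimeType", Files.probeContentType(file));
                System.out.println(metadata);
                return FileVisitResult.CONTINUE;
            }
        });
    }
}
```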

http://aperture.sourceforge.net/

Related Projects

SMILA - Unified information access architecture


SMILA is an extensible framework for building search solutions that access unstructured information in the enterprise. Besides providing essential infrastructure components and services, SMILA also delivers ready-to-use add-on components, such as connectors to the most relevant data sources. Using the framework as their basis enables developers to concentrate on creating higher-value solutions, such as semantically driven applications.

OpenPipe - Document Pipeline


OpenPipe is an open source, scalable platform for manipulating a stream of documents. A pipeline is an ordered set of steps performed on a document to convert it from its raw form into something ready to be put into the index. Typical operations include language detection, field manipulation, POS tagging, entity extraction, and submitting the document to a search engine; a minimal sketch of the pattern follows.
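
The sketch below is not OpenPipe's actual API; it is a self-contained illustration of the concept only: a document as a mutable field map, and a pipeline as an ordered list of steps applied in sequence before indexing.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.function.UnaryOperator;

// Conceptual sketch of a document pipeline (not OpenPipe's API): each step
// takes a document and returns the (possibly modified) document.
public class PipelineSketch {
    static class Document extends HashMap<String, String> {}

    interface Step extends UnaryOperator<Document> {}

    public static void main(String[] args) {
        List<Step> pipeline = new ArrayList<>();
        pipeline.add(doc -> { doc.put("lang", "en"); return doc; });                     // language detection (stubbed)
        pipeline.add(doc -> { doc.put("title", doc.get("title").trim()); return doc; }); // field manipulation
        pipeline.add(doc -> { System.out.println("indexing: " + doc); return doc; });    // submit to index (stubbed)

        Document doc = new Document();
        doc.put("title", "  Hello Pipelines  ");
        for (Step step : pipeline) doc = step.apply(doc);  // run the steps in order
    }
}
```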

Arachnode.net


Arachnode.net is an open source .NET web crawler written in C# that uses SQL Server 2005/2008. It is a complete and comprehensive crawler for downloading, indexing, and storing Internet content, including e-mail addresses, files, hyperlinks, images, and Web pages.

UIMA - Unstructured information management architecture


UIMA analyzes large volumes of unstructured information in order to discover knowledge that is relevant to an end user. It is a framework composed of different components, including language identification, language-specific segmentation, sentence boundary detection, and entity detection (person and place names). The framework manages these components and the data flows between them.
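
As a rough illustration, the standard UIMA embedding pattern looks like this: instantiate an analysis engine from an XML component descriptor, push text through it in a CAS, and read back the annotations the components produced. The descriptor file name here is a placeholder; a real one would describe, say, an aggregate of a tokenizer and an entity detector.

```java
import org.apache.uima.UIMAFramework;
import org.apache.uima.analysis_engine.AnalysisEngine;
import org.apache.uima.analysis_engine.AnalysisEngineDescription;
import org.apache.uima.jcas.JCas;
import org.apache.uima.util.XMLInputSource;

public class UimaSketch {
    public static void main(String[] args) throws Exception {
        // "desc.xml" is a placeholder for a real analysis engine descriptor.
        XMLInputSource in = new XMLInputSource("desc.xml");
        AnalysisEngineDescription desc =
            UIMAFramework.getXMLParser().parseAnalysisEngineDescription(in);
        AnalysisEngine engine = UIMAFramework.produceAnalysisEngine(desc);

        JCas jcas = engine.newJCas();                 // the CAS carries the document and its annotations
        jcas.setDocumentText("Alice met Bob in Paris.");
        engine.process(jcas);                         // the framework routes data through the components

        jcas.getAnnotationIndex().forEach(a ->        // print whatever the pipeline annotated
            System.out.println(a.getType().getName() + ": " + a.getCoveredText()));
    }
}
```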

Gate - General Architecture for Text Engineering


GATE excels at text analysis of all shapes and sizes. It supports diverse language processing tasks with parsers, morphological analysers, taggers, information retrieval tools, information extraction components for various languages, and many others. It also provides support for measuring, evaluating, modelling, and persisting data structures. It can analyze text or speech, has built-in support for machine learning, and supports different machine learning implementations via plugins.

ManifoldCF - Framework for connecting Source Content Repositories


ManifoldCF is an effort to provide an open source framework for connecting source content repositories such as Microsoft SharePoint, EMC Documentum, FileNet, LiveLink (OpenText), Patriarch, Meridio (Autonomy), and Windows shares to target repositories or indexes such as Apache Solr and QBase (formerly MetaCarta). It also includes a file system connector, a JDBC connector, an RSS crawler, and a web crawler.

NCrawler


NCrawler is a simple and very efficient multithreaded web crawler with pipeline-based processing, written in C#. It contains HTML, text, PDF, and IFilter document processors as well as language detection (Google). It is easy to add pipeline steps to extract, use, and alter information.

Constellio - Enterprise Search engine


Constellio Open Source Enterprise Search is based on Apache Solr and uses the Google Search Appliance connector architecture. It allows you, with a single click, to find all relevant content in your organization (Web, email, ECM, CRM, etc.).

Tika - A content analysis toolkit


Apache Tika is a toolkit for detecting and extracting metadata and structured text content from various documents using existing parser libraries.
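
A typical use of Tika's AutoDetectParser: the document type is detected automatically, and text plus metadata are extracted in one pass, whether the input is PDF, Office, HTML, or something else. A minimal sketch:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.sax.BodyContentHandler;

public class TikaSketch {
    public static void main(String[] args) throws Exception {
        AutoDetectParser parser = new AutoDetectParser();
        BodyContentHandler handler = new BodyContentHandler(); // default 100k character limit
        Metadata metadata = new Metadata();

        // Detect the format and extract text + metadata in one pass.
        try (InputStream stream = Files.newInputStream(Paths.get(args[0]))) {
            parser.parse(stream, handler, metadata);
        }
        System.out.println("type: " + metadata.get(Metadata.CONTENT_TYPE));
        System.out.println(handler.toString()); // extracted plain text
    }
}
```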

Open Search Server


Open Search Server is both a modern crawler and search engine and a suite of high-powered full-text search algorithms. It is built on proven open source technologies such as Lucene, ZK (zkoss), Tomcat, POI, and TagSoup. Open Search Server is a stable, high-performance piece of software.

Nutch


Nutch is open source web-search software. It builds on Lucene Java, adding web-specific features such as a crawler, a link-graph database, and parsers for HTML and other document formats.

Scrapy - Web crawling & scraping framework for Python


Scrapy is a fast, high-level web crawling and web scraping framework used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.

TiebaPostGrabber - A crawler that grabs the text content of posts in threads on Baidu Tieba


This crawler grabs the text content of posts in threads on Baidu Tieba.

Heritrix


Heritrix is the Internet Archive's open-source, extensible, web-scale, archival-quality web crawler project. Heritrix is designed to respect the robots.txt exclusion directives and META robots tags, and collect material at a measured, adaptive pace unlikely to disrupt normal website activity.

Web-crawler


Simple part: create a web crawler app in Python which, given a seed URL, can crawl through all links on a page and scan deeper, up to a given level of depth. While crawling, the app should be able to return the URLs of pages containing a specific search text. Input: (1) a seed URL, e.g. www.hackernews.com; (2) a depth, e.g. 5 (meaning: follow links on a page up to 5 levels deep); (3) a search text, e.g. "python". Output: the list of URLs that contain the specified text.
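
The assignment asks for Python, but for consistency with the other examples on this page, here is a sketch of the same algorithm in Java: a breadth-first crawl with a depth limit. It uses the jsoup library for fetching and link extraction (an assumption for illustration; any HTML parser would do).

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

// Breadth-first crawl from a seed URL, limited to maxDepth levels; prints
// every URL whose page text contains the search string.
public class SeedCrawler {
    public static void main(String[] args) throws Exception {
        String seed = args[0];                    // e.g. "https://news.ycombinator.com" (scheme required)
        int maxDepth = Integer.parseInt(args[1]); // e.g. 5
        String query = args[2];                   // e.g. "python"

        Set<String> visited = new HashSet<>();
        Deque<String[]> frontier = new ArrayDeque<>();
        frontier.add(new String[] { seed, "0" });

        while (!frontier.isEmpty()) {
            String[] entry = frontier.poll();
            String url = entry[0];
            int depth = Integer.parseInt(entry[1]);
            if (url.isEmpty() || depth > maxDepth || !visited.add(url)) continue;
            try {
                Document page = Jsoup.connect(url).get();
                if (page.text().contains(query)) System.out.println(url);
                for (Element link : page.select("a[href]"))   // enqueue outgoing links one level deeper
                    frontier.add(new String[] { link.absUrl("href"), String.valueOf(depth + 1) });
            } catch (Exception e) {
                // skip unreachable or non-HTML pages
            }
        }
    }
}
```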

TCPDF - PHP class for generating PDF


TCPDF is a PHP class for generating PDF documents without requiring external extensions. TCPDF supports UTF-8, Unicode, RTL languages, XHTML, JavaScript, digital signatures, barcodes, and much more.

slinky - A high-performance web crawler / text analytics in Python, Redis, Hadoop, R, Gephi


Slinky is a high-performance web crawler and text analytics system built with Python, Redis, Hadoop, R, and Gephi.

tts_webcrawler - Text Technologies coursework 1 - web crawler


Text Technologies coursework 1 - web crawler

Norconex HTTP Collector - Enterprise Web Crawler


Norconex HTTP Collector is a full-featured web crawler (or spider) that can manipulate and store collected data in a repository of your choice (e.g. a search engine). It is very flexible, powerful, easy to extend, and portable.

Norconex HTTP Collector - A Web Crawler in Java


Norconex HTTP Collector is a web spider, or crawler, that aims to make the lives of Enterprise Search integrators and developers easier. It is portable, extensible, and reusable, supports robots.txt, can obtain and manipulate document metadata, is resumable upon failure, and much more.