
vue-meta - Manage page meta info in Vue 2.0 components. SSR + Streaming supported.

  •    JavaScript

vue-meta is a Vue 2.0 plugin that allows you to manage your app's meta information, much like react-helmet does for React. However, instead of setting your data as props passed to a proprietary component, you simply export it as part of your component's data using the metaInfo property. These properties, when set on a deeply nested component, will cleverly overwrite their parent components' metaInfo, thereby enabling custom info for each top-level view as well as coupling meta info directly to deeply nested subcomponents for more maintainable code.
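A minimal sketch of that pattern, assuming standard Vue 2 plugin registration (the component and meta values here are illustrative):

    // Register the plugin once, typically in the app entry point.
    import Vue from 'vue'
    import VueMeta from 'vue-meta'
    Vue.use(VueMeta)

    // Any component may then declare its own metaInfo.
    export default {
      name: 'AboutPage', // illustrative component
      metaInfo: {
        title: 'About Us',
        meta: [
          { name: 'description', content: 'Learn more about our team.' }
        ]
      }
    }

A subcomponent that also defines metaInfo overrides these values while it is mounted.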

laravel-sitemap - Create and generate sitemaps with ease

  •    PHP

This package can generate a sitemap without you having to add URLs to it manually. It works by crawling your entire site. The crawler can execute JavaScript on each page, so links injected into the DOM by JavaScript will be crawled as well.

vue-head - Manage the meta information of the head tag in a simple and easy way

  •    JavaScript

If you'd like to set default custom title options for every component, you can pass options to VueHead when registering it with Vue, as in the example below.
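A sketch of that registration; the separator and complement option names follow vue-head's documented options, but treat the values as illustrative:

    import Vue from 'vue'
    import VueHead from 'vue-head'

    Vue.use(VueHead, {
      separator: '-',           // placed between the page title and the complement
      complement: 'My Website'  // suffix appended to every document title
    })

With these defaults, a component title of 'Home' renders as 'Home - My Website'.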

schema-org - A fluent builder for Schema.org types and ld+json generator

  •    PHP

spatie/schema-org provides a fluent builder for all Schema.org types and their properties. The code in src is generated from Schema.org's RDFa standards file, so it provides objects and methods for the entire core vocabulary. The classes and methods are also fully documented as a quick reference. All types can be instantiated through the Spatie\SchemaOrg\Schema factory class, or with the new keyword.

Crawlme - Ajax crawling for your web application

  •    JavaScript

A Connect/Express middleware that makes your Node.js web application indexable by search engines. Crawlme generates static HTML snapshots of your JavaScript web application on the fly and has a built-in, periodically refreshing in-memory cache, so even though snapshot generation may take a second or two, search engines get them fast. This is beneficial for SEO, since response time is one of the factors used in the page-rank algorithm.

Making Ajax applications crawlable has always been tricky, since search engines don't execute the JavaScript on the sites they crawl. The usual solution is to provide search engines with pre-rendered HTML versions of each page on your site, but creating those HTML versions has until now been a tedious and error-prone process with many manual steps. Crawlme fixes this by rendering HTML snapshots of your web application on the fly whenever the Googlebot crawls your site, as sketched below. Besides making the more-or-less manual creation of indexable HTML versions obsolete, this also means Google will always index the latest version of your site and not some old pre-rendered version.
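A sketch of the wiring this implies for an Express app; that crawlme() returns a Connect/Express middleware is an assumption based on the description above:

    // Assumed API: crawlme() returns middleware that detects crawler requests
    // and answers them with a cached, pre-rendered HTML snapshot.
    const express = require('express');
    const crawlme = require('crawlme');

    const app = express();
    app.use(crawlme());                 // snapshots for search engine crawlers
    app.use(express.static('public'));  // normal assets for real browsers
    app.listen(8080);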

node-pagerank - Node

  •    JavaScript

Tool for finding the Google PageRank of a given URL. It can be used as a Node module or as a CLI command when installed with -g. Note that Google has removed public access to PageRank scores, so this library is no longer functional.

php-sitemap - Standalone sitemap builder, 100% standards-compliant.

  •    PHP

Builds sitemaps for pages, images, and media files, and provides a class to submit them to search engines. The sitemaps it produces are supported by the main search engines, Google and Bing, and can be output in XML and gzip formats.

laravel-robots-middleware - Enable or disable the indexing of your app

  •    PHP

A tiny, opinionated package to enable or disable indexing of your site via a middleware in Laravel.

googlebot - Express middleware that returns the resulting HTML after executing JavaScript, allowing crawlers to read the page

  •    JavaScript

This module implements an Express middleware that renders a full HTML/JS/CSS version of a page when JavaScript is not available in the client and the site relies heavily on it for rendering, as when using Ember, Angular, jQuery, or Backbone. I needed to code this for work, to deliver an SEO-friendly version of the website to the Google crawler, and found no solution available. Google replaces the hashbang (or the URL) with ?_escaped_fragment_= and appends the rest of the URL there, expecting a different, completely rendered version of the site; the middleware recognizes such requests and, instead of returning the normal response, returns the fully rendered version that PhantomJS creates.
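The URL rewrite the middleware keys on can be shown in a few lines (the example URL is made up; the ?_escaped_fragment_= convention is the scheme described above):

    // A hashbang URL as served to browsers...
    const pretty = 'https://example.com/#!/products/42';
    // ...arrives from the crawler with the hashbang replaced:
    const crawled = pretty.replace('#!', '?_escaped_fragment_=');
    // => 'https://example.com/?_escaped_fragment_=/products/42'
    // On such requests the middleware returns the PhantomJS-rendered HTML
    // instead of the normal response.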

gulp-sitemap - Generate a search engine friendly sitemap.xml using a Gulp stream

  •    JavaScript

Easily generate a search-engine-friendly sitemap.xml from your project. Search engines love sitemap.xml, and it helps SEO as well.
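A minimal gulpfile sketch; siteUrl follows the plugin's documented usage, while the globs and paths are illustrative:

    const gulp = require('gulp');
    const sitemap = require('gulp-sitemap');

    gulp.task('sitemap', () =>
      gulp.src('build/**/*.html', { read: false }) // file names only, no contents
        .pipe(sitemap({ siteUrl: 'https://www.example.com' }))
        .pipe(gulp.dest('build'))                  // emits build/sitemap.xml
    );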

crawlable - Crawlable is a way to render your web application as a static web site

  •    JavaScript

Crawlable could be your solution! It renders your dynamic client-side JavaScript on the server side, so it can serve static, cached HTML to your client before any JavaScript starts executing on the page. You may say, "OK, but what if I have cached some dynamic content that can be updated at any time?"

SerpScraper - A smart, browser-like scraper built to extract search results from Google and Bing.

  •    PHP

The purpose of this library is to provide an easy, undetectable, and captcha-resistant way to extract data from all major search engines such as Google and Bing. Note that Google Search no longer uses its image-based captcha; it has moved on to reCAPTCHA v2, which makes it very difficult for robots and scripts to bypass. We're looking for a solution. Stay tuned.