dV2t Enterprise Library


dV2t Enterprise Library: Data, Cache, Security, Utilities, ... Uses .NET Framework 2.0 or later.

http://dv2tentlib.codeplex.com/

Related Projects

dV2t Translator


dV2t Translator (Using Bing Translator, Google Translator)

Bagri - XML/Document DB on top of distributed cache


Bagri is a document database built on top of a distributed cache solution such as Hazelcast or Coherence. The system allows processing of semi-structured, schema-less documents and running distributed queries on them in real time. It scales horizontally very well through data sharding: all documents are distributed evenly between distributed cache partitions.
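As a minimal illustration of that even distribution (nothing here reflects Bagri's actual API; partition count and key names are made up), a document key can simply be hashed onto a fixed number of cache partitions:

```python
import hashlib

PARTITION_COUNT = 8  # hypothetical number of distributed cache partitions

def partition_for(doc_id: str) -> int:
    """Map a document id onto a partition by hashing its key."""
    digest = hashlib.md5(doc_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % PARTITION_COUNT

# Documents with uniformly distributed keys spread roughly evenly across partitions.
partitions = {p: [] for p in range(PARTITION_COUNT)}
for doc_id in (f"order-{n}" for n in range(1000)):
    partitions[partition_for(doc_id)].append(doc_id)

print({p: len(docs) for p, docs in partitions.items()})
```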

membase - distributed key-value database


Membase is a distributed key-value database management system optimized for storing data behind interactive web applications. These applications must service many concurrent users: creating, storing, retrieving, aggregating, manipulating, and presenting data in real time. To support these requirements, Membase processes data operations with quasi-deterministic low latency and high sustained throughput.

cache-js - A javascript module to cache json data (or any data) using a Web SQL database.


A javascript module to cache json data (or any data) using a Web SQL database.

GUN - A realtime, decentralized, offline-first, graph database engine


GUN is a realtime, distributed, offline-first graph database engine. Lightweight and powerful, GUN does state synchronization out of the box. It is peer-to-peer by design, meaning you have no centralized database server to maintain. It has offline support and works even without an internet connection: users can save data offline, and when the network comes back online GUN automatically synchronizes the data.



Apache Ignite - High performance in-memory data grid


Apache Ignite In-Memory Data Fabric is a high-performance, integrated and distributed in-memory platform for computing and transacting on large-scale data sets in real-time, orders of magnitude faster than possible with traditional disk-based or flash technologies.

StackQueryTest - Testing database vs cache based querying of StackOverflow data


Testing database vs cache based querying of StackOverflow data

Memcached - distributed object caching system


Memcached is a high-performance, distributed memory object caching system, generic in nature but intended for use in speeding up dynamic web applications by alleviating database load. Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) resulting from database calls, API calls, or page rendering.
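The usual way it alleviates database load is the cache-aside pattern: check memcached first, and only on a miss go to the database and store the result. A rough sketch, assuming the third-party pymemcache client and a hypothetical load_user_from_db helper:

```python
import json
from pymemcache.client.base import Client  # third-party memcached client (assumption)

cache = Client(("localhost", 11211))

def load_user_from_db(user_id):
    # Hypothetical slow database query standing in for the real data source.
    return {"id": user_id, "name": "example"}

def get_user(user_id, ttl=300):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:                            # cache hit: the database is skipped
        return json.loads(cached)
    user = load_user_from_db(user_id)                 # cache miss: query the database...
    cache.set(key, json.dumps(user), expire=ttl)      # ...and keep the result for next time
    return user
```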

MonetDB


MonetDB is a high-performance SQL and XQuery column-store database management system with automatic index management, a flexible optimizer infrastructure, and programmable backend functionality.

BizziBiz-BasecampCache


Basecamp Cache is a system to locally cache all Basecamp data in a MySQL database. The benefit of using a local MySQL cache is that you can run more complex queries much more quickly. Due to the current structure of Basecamp's XML files, cross-relating data can otherwise be time-prohibitive.

libgibsonclient - Gibson cache server native client library.


Gibson is a high-efficiency, tree-based memory cache server. It is not meant to replace a database, since it was written to be a key-value store used as a cache server, but it is not the usual cache server. Normal key-value stores (memcache, redis, etc.) use a hash table as their main data structure, so every key is hashed with a specific algorithm and the resulting hash is used to identify the given value in memory. This approach, although very fast, doesn't allow the user to execute glo…
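To illustrate the distinction the description is drawing (this is a generic sketch, not Gibson's implementation): a hash table only answers exact-key lookups, while a tree-ordered key space lets one operation address every key under a common prefix.

```python
# Hash-table store: only exact-key lookups are possible.
hash_store = {"user:1:name": "ada", "user:1:mail": "a@x", "user:2:name": "bob"}
print(hash_store["user:1:name"])    # exact key: fine
# hash_store["user:1:*"]            # no way to address "all keys under user:1"

# Tree-like store: keys sharing a prefix sit together, so one operation can
# act on the whole subtree. (A trie makes this cheap; the scan below is only
# meant to show the semantics.)
def prefix_items(store, prefix):
    return {k: v for k, v in sorted(store.items()) if k.startswith(prefix)}

print(prefix_items(hash_store, "user:1:"))  # {'user:1:mail': 'a@x', 'user:1:name': 'ada'}
```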

bigcache - Efficient cache for gigabytes of data written in Go.


BigCache is a fast, concurrent, evicting in-memory cache written to keep a large number of entries without impacting performance. BigCache keeps entries on the heap but omits GC for them. To achieve that, it operates on byte arrays, so entry (de)serialization in front of the cache is needed in most use cases.
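Because the cache only sees raw bytes, callers serialize on the way in and deserialize on the way out. A minimal Python analogue of that "serialization in front of the cache" idea, with a plain dict of bytes standing in for the byte-oriented cache:

```python
import json

byte_cache = {}   # stand-in for a byte-oriented cache: values are opaque bytes

def cache_set(key: str, entry: dict) -> None:
    byte_cache[key] = json.dumps(entry).encode("utf-8")   # serialize in front of the cache

def cache_get(key: str):
    raw = byte_cache.get(key)
    return None if raw is None else json.loads(raw)       # deserialize on the way out

cache_set("user:42", {"name": "ada", "visits": 3})
print(cache_get("user:42"))   # {'name': 'ada', 'visits': 3}
```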

BangDB - NoSQL for Real Time Performance


BangDB is a pure vanilla key-value NoSQL data store. The goal of BangDB is to be a fast, reliable, robust, scalable, and easy-to-use data store for the various data management services required by applications. BangDB comes in several flavors: embedded in-memory, network, and distributed data grid / elastic cache. BangDB is highly concurrent and runs operations in parallel as much as possible.

Apache Geode - Distributed, In-memory Database for Scale-Out Applications


Apache Geode is a distributed, in-memory database for scale-out applications. All data is stored in memory for low latency, and performance scales linearly as nodes are added. Data is distributed automatically between nodes to optimize performance. Clusters fail over to other nodes in case of failure and rebalance the remaining resources. Geode servers can also be configured to speak the memcached protocol.

java data object persistence in file


Makes Java data objects persistent in the file system without a database: a middle ground between serializing to a file and using a full database. Cache your data model in a file and manage more objects than memory can contain.
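The project itself is Java; as a rough analogue of the idea (objects live in a file-backed store so the working set can exceed RAM, with no database server involved), Python's standard-library shelve module behaves similarly:

```python
import shelve

# Objects are written to a file-backed store; the data model can be larger
# than what fits in memory, and no database server is involved.
with shelve.open("datamodel.db") as store:
    for n in range(10_000):
        store[f"item:{n}"] = {"id": n, "payload": "x" * 100}

with shelve.open("datamodel.db") as store:
    print(store["item:9999"]["id"])   # objects are loaded back one key at a time
```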

spark-parquet-thrift-example - Example Spark project using Parquet as a columnar store with Thrift objects


Apache Spark is a research project for distributed computing which interacts with HDFS and heavily utilizes in-memory caching. Modern datasets contain hundreds or thousands of columns and are too large to cache all the columns in Spark's memory, so Spark has to resort to paging to disk. The disk-paging penalty can be lessened or removed if the Spark application only interacts with a subset of the columns in the overall database, by using a columnar store such as Parquet, which loads only the specified columns of data into a Spark RDD. Matt Massie's example uses Parquet with Avro for data serialization and filters loading based on an equality predicate, but it does not show how to load only a subset of columns. This project is a complete Scala/sbt project that uses Thrift for data serialization and shows how to load columnar subsets.
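The column-subset idea looks roughly like this in PySpark (the project itself uses Scala/sbt and Thrift; the path and column names below are made up for illustration):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-column-subset").getOrCreate()

# Because Parquet is a columnar format, selecting two columns means only those
# columns are read from disk and cached, not the whole dataset.
df = (spark.read.parquet("hdfs:///data/events.parquet")   # hypothetical path
        .select("user_id", "event_type"))                 # hypothetical columns
df.cache()
print(df.count())
```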

ChaiDB


ChaiDB is an embedded data store developed at the kernel level using a B-Tree implementation. It is a natural database choice for name/value applications such as JSON. JSON-Cache uses ChaiDB to provide a persistence and cache solution for JSON data.

dV2t 8Queen Demo


dV2t 8Queen Demo covers the Back Tracking, Back Jumping, Back Marking, Back Marking & Jumping, Forward Checking, and Dynamic Search Rearrangement algorithms.
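Of the listed algorithms, plain backtracking is the simplest: place one queen per row and undo a placement as soon as it conflicts with an earlier one. A small sketch (not taken from the demo itself):

```python
def solve_n_queens(n=8):
    """Return one solution as a list of column indices, one per row, via backtracking."""
    cols = []

    def safe(col):
        row = len(cols)
        return all(c != col and abs(c - col) != row - r for r, c in enumerate(cols))

    def place(row):
        if row == n:
            return True
        for col in range(n):
            if safe(col):
                cols.append(col)
                if place(row + 1):
                    return True
                cols.pop()            # backtrack: undo the placement, try the next column
        return False

    place(0)
    return cols

print(solve_n_queens())   # e.g. [0, 4, 7, 5, 2, 6, 1, 3]
```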

LucidDB - RDBMS built entirely for Data Warehousing and Business Intelligence


LucidDB is an RDBMS built entirely for data warehousing and business intelligence. It is based on architectural cornerstones such as column store, bitmap indexing, hash join/aggregation, and page-level multi-versioning. Every component of LucidDB was designed with the requirements of flexible, high-performance data integration and sophisticated query processing in mind.

perl-Cache-DB_File - Cache::DB_File - memory cache which, when full, swaps to DB_File database


Cache::DB_File - memory cache which, when full, swaps to DB_File database
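The same "bounded memory cache that overflows to an on-disk database" idea, sketched in Python with the standard-library dbm module standing in for DB_File (an illustration only, not the Perl module's API):

```python
import dbm

class OverflowCache:
    """Keep up to `limit` entries in memory; older entries are swapped to a dbm file."""

    def __init__(self, path, limit=1000):
        self.mem = {}
        self.limit = limit
        self.disk = dbm.open(path, "c")

    def set(self, key, value):
        if len(self.mem) >= self.limit:                 # memory full: spill the oldest entry
            old_key, old_value = next(iter(self.mem.items()))
            self.disk[old_key] = old_value
            del self.mem[old_key]
        self.mem[key] = value

    def get(self, key):
        if key in self.mem:
            return self.mem[key]
        k = key.encode()
        if k in self.disk:                              # fall back to the on-disk database
            return self.disk[k].decode()
        return None

cache = OverflowCache("overflow.db", limit=2)
for k, v in [("a", "1"), ("b", "2"), ("c", "3")]:
    cache.set(k, v)
print(cache.get("a"), cache.get("c"))   # "a" was swapped to disk, "c" is still in memory
```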