node-lru-cache

  •    JavaScript

A cache object that deletes the least-recently-used items. If you put more stuff in it, items will fall out.
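
A minimal usage sketch of that behavior, assuming the classic pre-v7 lru-cache API (a max option plus get/set/has); check the project README for the current constructor form:

```typescript
// Sketch assuming the pre-v7 lru-cache API; newer versions changed the import
// and constructor, so treat this as illustrative rather than authoritative.
import LRU from "lru-cache";

const cache = new LRU<string, number>({ max: 2 });

cache.set("a", 1);
cache.set("b", 2);
cache.get("a");    // touch "a": it is now the most recently used entry
cache.set("c", 3); // over the max: "b", the least recently used, falls out

console.log(cache.has("b")); // false
console.log(cache.has("a")); // true
```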

LeaderF - An asynchronous fuzzy finder used to quickly locate files, buffers, MRUs, tags, etc.

  •    Python

This plugin is mainly used for locating files, buffers, MRUs, and tags in large projects. To install it, just put the plugin files in your ~/.vim (Linux) or ~/vimfiles (Windows). For Vundle users, add Plugin 'Yggdroot/LeaderF' to your .vimrc.

hyperlru - An LRU implementation as tiny & fast as possible.

  •    JavaScript

An LRU implementation as tiny & fast as possible. There are a lot of LRU implementations, but most of them have poor performance and are hard to understand.

tmp-cache - A least-recently-used cache manager in 35 lines of code~!

  •    JavaScript

LRU caches operate on a first-in-first-out queue: the first item is the oldest and is therefore deleted once the max limit has been reached. Aside from the changes noted in its README, tmp-cache extends the Map class, so all of Map's properties and methods are inherited.
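
As an illustration of that design, here is a minimal sketch (not tmp-cache's actual code) of an LRU cache built on Map, relying on Map's insertion-order iteration so the first key is always the oldest:

```typescript
// Minimal LRU-on-Map sketch (not tmp-cache's actual code). Map iterates in
// insertion order, so the first key is always the least recently used.
class TinyLRU<K, V> extends Map<K, V> {
  constructor(private max: number) {
    super();
  }

  get(key: K): V | undefined {
    if (!super.has(key)) return undefined;
    const value = super.get(key)!;
    super.delete(key);      // re-insert to mark the entry as most recently used
    super.set(key, value);
    return value;
  }

  set(key: K, value: V): this {
    if (super.has(key)) super.delete(key);
    super.set(key, value);
    if (super.size > this.max) {
      // Evict the first (oldest) key once the max limit is exceeded.
      super.delete(super.keys().next().value as K);
    }
    return this;
  }
}

const lru = new TinyLRU<string, number>(2);
lru.set("a", 1);
lru.set("b", 2);
lru.set("c", 3);           // "a" was the oldest entry and falls out
console.log(lru.has("a")); // false
```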

redismru.vim - MRU plugin built for speed with async IO operations

  •    Vim

Most-recently-used files plugin for Vim 8 & Neovim, using Redis and the job-control feature. Redismru also loads the MRU list on CursorHold, which makes working with multiple Vim instances easier.

fzf-filemru - File MRU with fzf.vim

  •    Shell

Vim plugin that tracks your most recently and frequently used files while using the fzf.vim plugin. It provides the FilesMru and ProjectMru commands, which are basically pass-throughs to the Files command. So all you really need to do is use FilesMru instead of Files.

bicache - A hybrid MFU / MRU, TTL, sharded cache for Go (aka LFU / LRU)

  •    Go

Bicache is a sharded hybrid MFU/MRU, TTL-optional, general-purpose cache for Go. Pure MRU (LRU-eviction) caches are great because they're fast and incredibly simple, and items that are used often generally remain in the cache. One downside is that a large sequential scan, where the number of misses exceeds the MRU cache size, causes highly used (and perhaps the most useful) data to be evicted in favor of recent data. An MFU cache instead judges an item's value by access frequency rather than recency, which insulates valuable keys from large scans of potentially less valuable data.
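
A toy sketch of that MFU/MRU split follows. This is not bicache's implementation; the promotion threshold and tier sizes are made-up knobs. The idea: frequently hit keys are promoted into an MFU tier that a large sequential scan over the MRU tier cannot evict.

```typescript
// Toy MFU/MRU sketch (not bicache's code). Hot keys are promoted into an MFU
// tier that LRU eviction in the MRU tier can never touch, so a large scan of
// cold keys cannot push the most-used data out.
class HybridCache<K, V> {
  private mfu = new Map<K, V>();       // frequency-protected tier
  private mru = new Map<K, V>();       // recency tier, LRU-evicted
  private hits = new Map<K, number>(); // access counts driving promotion

  constructor(
    private mfuMax: number,
    private mruMax: number,
    private promoteAt = 3,             // hypothetical promotion threshold
  ) {}

  get(key: K): V | undefined {
    if (this.mfu.has(key)) return this.mfu.get(key);
    if (!this.mru.has(key)) return undefined;
    const value = this.mru.get(key)!;
    const count = (this.hits.get(key) ?? 0) + 1;
    this.hits.set(key, count);
    this.mru.delete(key);
    if (count >= this.promoteAt && this.mfu.size < this.mfuMax) {
      this.mfu.set(key, value);        // promoted: now scan-proof
    } else {
      this.mru.set(key, value);        // re-insert as most recently used
    }
    return value;
  }

  set(key: K, value: V): void {
    if (this.mfu.has(key)) {
      this.mfu.set(key, value);
      return;
    }
    this.mru.delete(key);
    this.mru.set(key, value);
    if (this.mru.size > this.mruMax) {
      // Map iterates in insertion order: the first key is least recently used.
      const oldest = this.mru.keys().next().value as K;
      this.mru.delete(oldest);
      this.hits.delete(oldest);
    }
  }
}
```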