node-cache - A simple in-memory cache for nodejs

  •    Javascript

A simple in-memory cache with put(), get(), and del() methods.
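
A minimal usage sketch of the put()/get()/del() API named above; the module name passed to require() and the TTL argument are assumptions to verify against the package's README.

```javascript
const cache = require('node-cache'); // module name assumed; check the README

cache.put('user:42', { name: 'Ada' }, 5000); // store a value (assumed 5 s TTL)
console.log(cache.get('user:42'));           // -> { name: 'Ada' }
cache.del('user:42');
console.log(cache.get('user:42'));           // -> null after deletion
```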

ps_mem - A utility to accurately report the in core memory usage for a program

  •    Python

Yes, the name is a bit weird; coremem would be more appropriate, but the ps_mem name remains for backwards-compatibility reasons. Installing with pip install ps_mem is supported, rpm and deb packages are available for most distros, and the ps_mem.py script can also be run directly.

StreamSaver.js - StreamSaver writes streams to the filesystem directly and asynchronously

  •    HTML

StreamSaver.js is the solution to saving streams on the client side. It is perfect for web apps that need to save really large amounts of data created on the client side, where RAM is really limited, like on mobile devices. There is no magical saveAs() function that saves a stream, file, or blob. The way we mostly save Blobs/Files today is with the help of the a[download] attribute; FileSaver.js takes advantage of this to create a convenient saveAs(blob, filename) function, which is fantastic, but you can't create an object URL from a stream and attach it to a link...
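
Below is a minimal sketch of the write-stream pattern from StreamSaver's documentation; it assumes streamSaver has been loaded globally via a script tag.

```javascript
// Create a write stream straight to the user's filesystem and feed it
// client-generated data, without buffering everything in RAM first.
const fileStream = streamSaver.createWriteStream('hello.txt');
const writer = fileStream.getWriter();

const encoder = new TextEncoder();
writer.write(encoder.encode('Data produced on the client side.'));
writer.close(); // finishes the download
```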

node-usage - process usage lookup with nodejs

  •    Javascript

By default the reported CPU percentage is an average over the lifetime of the process. But if you call usage.lookup() continuously for a given pid, you can turn on the keepHistory flag and you'll get the CPU usage since the last time you tracked it, which reflects the current CPU usage.
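
A short sketch of continuous lookups with keepHistory enabled; the result field names are taken from the package's README.

```javascript
const usage = require('usage');
const pid = process.pid; // monitor this process itself

setInterval(() => {
  // With keepHistory on, result.cpu is the usage since the previous call.
  usage.lookup(pid, { keepHistory: true }, (err, result) => {
    if (err) throw err;
    console.log(`cpu: ${result.cpu}%  memory: ${result.memory} bytes`);
  });
}, 1000);
```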

sympact - 🔥 Simple stupid CPU/MEM "Profiler" for your JS code.

  •    Javascript

🔥 An easy way to calculate the "impact" of running a task in Node.js. Coded with ❤️ by Simone Primarosa. Sympact runs a script and profiles its execution time, CPU usage, and memory usage, then returns an execution report containing the averages of the results.
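
A minimal sketch, assuming sympact is called with a string of code to profile and returns a promise for the report, as its README describes.

```javascript
const sympact = require('sympact');

(async () => {
  // Profile a CPU-heavy snippet; the report carries the averaged
  // execution time, CPU usage, and memory usage described above.
  const report = await sympact(`
    let sum = 0;
    for (let i = 0; i < 1e8; i++) sum += i;
  `);
  console.log(report);
})();
```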

node-kiwf - kill it with fire, in-process node

  •    Javascript

An in-process Node.js kill-switch that forces a Node process to crash when it violates configured restrictions such as memory usage or uptime.
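
The idea can be illustrated in plain Node.js; this is a conceptual sketch of such a kill-switch, not kiwf's actual API, which its README documents.

```javascript
// Conceptual kill-switch, NOT kiwf's actual API: crash on purpose once
// hypothetical memory or uptime limits are exceeded, so a supervisor
// (pm2, systemd, ...) can restart the process from a clean state.
const MAX_RSS_BYTES = 512 * 1024 * 1024; // hypothetical 512 MB limit
const MAX_UPTIME_SECONDS = 60 * 60;      // hypothetical 1 hour limit

setInterval(() => {
  const { rss } = process.memoryUsage();
  if (rss > MAX_RSS_BYTES || process.uptime() > MAX_UPTIME_SECONDS) {
    process.exit(1); // kill it with fire
  }
}, 10000).unref(); // don't keep the event loop alive just for this check
```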

tmux-cpu - Display CPU usage in your tmux status bar or in the terminal.

  •    Javascript

You'll need to install both tmux-cpu and tmux-mem with npm install -g for this to work.

ram-policy-editor - AliCloud RAM Policy Editor for OSS

  •    Javascript

A visual RAM policy editor for OSS. When the EnablePath option is selected, the related path permissions are granted automatically.

recurrent-visual-attention - A PyTorch Implementation of "Recurrent Models of Visual Attention"

  •    Python

This is a PyTorch implementation of Recurrent Models of Visual Attention by Volodymyr Mnih, Nicolas Heess, Alex Graves and Koray Kavukcuoglu. The Recurrent Attention Model (RAM) is a recurrent neural network that processes inputs sequentially, attending to different locations within the image one at a time, and incrementally combining information from these fixations to build up a dynamic internal representation of the image.

EOS-Proxy-Token - Proxy token to allow mitigating EOSIO Ram exploit

  •    WebAssembly

If you want to be sure that the person you're sending tokens to can't lock up your RAM, send the tokens through the safetransfer account and add the intended recipient as the memo. IMPORTANT: Do not interact with dapps through this proxy. If you do, they will act as if they are interacting with this contract, not with you.

resusage - D library for getting system and process resource usage

  •    D

Obtains virtual memory, RAM, and CPU usage for the whole system or for a single process. Currently works on Linux and Windows.

gram - A 64bit-TinyRAM simulator in Go

  •    Go

gram is a package for simulating TinyRAM (http://www.scipr-lab.org/specs.html), written in Go.

ram_modified - "Recurrent Models of Visual Attention" in TensorFlow

  •    Python

This project is a modified version of https://github.com/jlindsey15/RAM. The critical problem with that implementation is that the location network cannot learn because of how tf.stop_gradient is used, so it only reached 94% accuracy, which is relatively poor compared to the results in the paper. If tf.stop_gradient was commented out, the classification result was very bad. The likely reason is that the gradient flow is shared across the location, core, and glimpse networks: through this sharing, the gradients of the classification part are corrupted by the gradients of the reinforcement part, so the classification result becomes very bad. (If someone wants to share gradients, a weighted loss is needed; please refer to https://arxiv.org/pdf/1412.7755.pdf.) In their follow-up research, 'Multiple Object Recognition with Visual Attention' (https://arxiv.org/pdf/1412.7755.pdf), the authors softly separate the location network from the others through a multi-layer RNN. From this, I assume that sharing the gradient through the whole network is not a good idea, so I separated them and finally got a good result. In summary, the learning strategy is as follows: the location network and baseline network learn with the gradients of reinforcement learning only.

profmem - 🔧 R package: profmem - Simple Memory Profiling for R

  •    R

The profmem() function of the profmem package provides an easy way to profile the memory usage of an R expression. It logs all memory allocations done in R. Profiling memory allocations is helpful when we, for instance, try to understand why a certain piece of R code consumes more memory than expected. In the package's example, which allocates a 1000-element integer vector x and a 10,000-element random matrix, we find that 4040 bytes are allocated for the integer vector x, because each integer value occupies 4 bytes of memory; the additional 40 bytes are due to the internal data structure R uses for each variable. The size of this allocation can also be confirmed by the value of object.size(x). We also see that rnorm(), which is called via matrix(), allocates 80040 + 2544 bytes, where the first figure reflects the 10,000 double values each occupying 8 bytes, and the second reflects some unknown allocation done internally by the native code that rnorm() uses. Finally, a further entry reflects the 80040-byte memory allocation done by matrix() itself.

Cache - Simple implementation of cache using VHDL

  •    VHDL

This project is intended to be a simple sample implementation of a cache and RAM using VHDL. The code is compiled using GHDL, the open-source compiler for VHDL.