node-jpickle - Full-javascript parser for Python's pickle format

  •    Javascript

Full-javascript parser for Python's pickle format. To handle more complex objects, the corresponding JavaScript classes first need to be registered with the module. For most basic cases these can just be empty objects mapped to a Python class name. If a type is not registered via the module's emulated member, unpickling will fail with an exception.

https://github.com/jlaine/node-jpickle
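
For illustration, a minimal sketch of the Python side of such a round trip. The class, file name and protocol choice are assumptions, not part of node-jpickle; per the description above, the JavaScript side can only load this once the corresponding class name has been registered via the emulated member.

    import pickle

    # Hypothetical class; node-jpickle would need this class name
    # ('__main__.MyClass' here) registered in its emulated mapping
    # before it can unpickle instances of it.
    class MyClass:
        def __init__(self, value):
            self.value = value

    # Protocol 2 is assumed as a conservative choice for a
    # pure-JavaScript parser; newer protocols may not be supported.
    with open("obj.pickle", "wb") as f:
        pickle.dump(MyClass(42), f, protocol=2)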

Related Projects

sqlitedict - Persistent dict, backed by sqlite3 and pickle, multithread-safe.

  •    Python

Pickle is used internally to (de)serialize the values. Keys are arbitrary strings; values can be any picklable objects (uses cPickle with the highest protocol).
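
A minimal usage sketch, assuming the package's conventional SqliteDict interface:

    from sqlitedict import SqliteDict

    # Keys are strings; values are pickled transparently on assignment.
    with SqliteDict("example.sqlite", autocommit=True) as db:
        db["point"] = {"x": 1, "y": [2, 3]}   # any picklable object
        print(db["point"])                    # unpickled on access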

pickle - PHP Extension installer

  •    PHP

Pickle is a new PHP extension installer that works easily on all platforms. It is based on Composer, and the plan is for Composer to fully support it. See https://github.com/composer/composer/pull/2898#issuecomment-48439196 for the Composer side of the discussion.

PPL - The Pickle Programming Language

  •    C

The Pickle Programming Language (PPL) and related utilities.

picKLE

  •    PHP

picKLE is an image gallery system written in PHP. It generates thumbnails and resampled images on the fly and caches them. It is designed to be extremely simple to install and configure.

Pickle

  •    CSharp

A compilation project into which any useful code I write or find will be incorporated. Currently it has some extension methods and unit tests. Why is this project called pickle? Because my last name is Dill! Please visit my blog @ http://weblogs.asp.net/bdill.


Pickle: The Penguin Client Library

  •    PHP

Pickle is an open source PHP library which eases the development of third-party game clients for the massively multiplayer online Flash game Club Penguin.

shadow - jemalloc heap exploitation framework

  •    Python

Apart from the tool's source code, this repository also includes documentation on setting up an Android userland debugging environment for using shadow, a quick overview of Android's jemalloc structures using shadow, and some notes on how double, unaligned and arbitrary free() bugs behave on Android's jemalloc. When you issue a jemalloc-specific command for the first time, shadow parses all the jemalloc metadata it knows about and saves it to a Python pickle file. Subsequent commands use this pickle file instead of parsing the metadata from memory again, in order to be faster.

pickle - Easy model creation/reference in cucumber - optionally leveraging your factories/blueprints

  •    Ruby

Easy model creation/reference in cucumber - optionally leveraging your factories/blueprints

Crap Factor Z

  •    Python

CFZ is a random curse generator written in Python, featuring a nifty pickle data-structure editor and an X interface using Tkinter or wxPython. Soon I'll rewrite it in C++.

Python DB-API 2.0 module for ADO

  •    Python

Python module that makes it easy to use Microsoft ADO for connecting to databases and other data sources. This module is included as part of pywin32; download it separately for IronPython or to update. Unzip and use setup.py to install on all platforms. The 2.5 version has a Linux-compatible REMOTE access module. Read quick_reference.odt from the zip for documentation. [Note: Pyro4 version 2.20(+) must have PYRO_SERIALIZER=pickle set for adodbapi.server to work correctly.]
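
A minimal DB-API 2.0 sketch; the provider, connection string and table below are placeholders for whatever ADO data source you actually use:

    import adodbapi

    # Placeholder ADO connection string; substitute your own provider.
    conn = adodbapi.connect(
        "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=example.accdb;")
    cur = conn.cursor()
    cur.execute("SELECT * FROM people")   # 'people' is a hypothetical table
    for row in cur.fetchall():
        print(row)
    cur.close()
    conn.close()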

Trusted Pickle - Python module

  •    Python

TrustedPickle is a Python module which lets you create and sign your data files. By using public/private key techniques, this module protects your users from loading malicious data files that others might claim you created. LEGAL FOR EXPORT.

headlines - Automatically generate headlines to short articles

  •    Jupyter

It is assumed that you already have training and test data. The data is made of many examples (I'm using 684K); each example consists of the text from the start of the article, which I call the description (or desc), and the text of the original headline (or head). The texts should already be tokenized, with the tokens separated by spaces. Once you have the data ready, save it in a Python pickle file as a tuple (heads, descs, keywords), where heads is a list of all the head strings and descs is a list of all the article strings, in the same order and of the same length as heads. The keywords information is ignored, so you can put None in its place.
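
Following that description, preparing such a pickle file might look like this sketch (the file name and example texts are placeholders):

    import pickle

    # Tokenized placeholder examples; tokens are separated by spaces.
    heads = ["dow falls 100 points", "new frog species found"]
    descs = ["the dow jones industrial average fell sharply on monday",
             "researchers announced the discovery of a new frog species"]

    # The keywords slot is ignored by the model, so None is fine there.
    with open("train.pickle", "wb") as f:
        pickle.dump((heads, descs, None), f)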

Tgres - Time Series in PostgreSQL

  •    Go

Tgres is a tool, written in Go, for receiving and reporting on simple time series, using PostgreSQL for storage. Tgres can receive data using the Graphite Text, UDP and Pickle protocols, as well as Statsd (counters, gauges and timers). It supports enough of the Graphite HTTP API to be usable with Grafana, and implements the majority of the Graphite functions.

asar - Simple extensive tar-like archive format with indexing

  •    Javascript

Asar is a simple extensive archive format; like tar, it concatenates all files together without compression, while still supporting random access. You can pass in a transform option: a function that returns either nothing or a stream.Transform. The latter will be applied to files going into the .asar archive to transform them (e.g. to compress them).

boopickle - Binary serialization library for efficient network communication

  •    Scala

To use it in your code, simply import the Default object contents. All examples in this document assume this import is present. To serialize (pickle) something, just call Pickle.intoBytes with your data. This will produce a binary ByteBuffer containing an encoded version of your data.

cloudpickle - Extended pickling support for Python objects

  •    Python

cloudpickle makes it possible to serialize Python constructs not supported by the default pickle module from the Python standard library. cloudpickle is especially useful for cluster computing where Python code is shipped over the network to execute on remote hosts, possibly close to the data.
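
A small sketch of that use case: serializing a construct, such as a lambda, that the default pickle module rejects:

    import pickle
    import cloudpickle

    square = lambda x: x * x            # stdlib pickle cannot serialize this
    payload = cloudpickle.dumps(square)

    # The receiving side needs only the standard library to load it.
    restored = pickle.loads(payload)
    print(restored(7))                  # prints 49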

redis_failover - redis_failover is a ZooKeeper-based automatic master/slave failover solution for Ruby

  •    Ruby

redis_failover provides a full automatic master/slave failover solution for Ruby. Redis does not currently provide an automatic failover capability when configured for master/slave replication: when the master node dies, a new master must be manually brought online and assigned as the slaves' new master. This manual switch-over is not desirable in high-traffic sites where Redis is a critical part of the overall architecture. The existing standard Redis client for Ruby also only supports configuration for a single Redis server, whereas with master/slave replication it is desirable to have all writes go to the master and all reads go to one of the N configured slaves.

This gem (built using ZK) attempts to address these failover scenarios. One or more Node Manager daemons run as background processes and monitor all of your configured master/slave nodes. When a daemon starts up, it automatically discovers the current master and slaves, and background watchers are set up for each of the Redis nodes. As soon as a node is detected as being offline, it is moved to an "unavailable" state. If the node that went offline was the master, one of the slaves is promoted to master, and all existing slaves are automatically reconfigured to point to the new master for replication. All nodes marked as unavailable are periodically checked to see if they have been brought back online; if so, the newly available nodes are configured as slaves and brought back into the list of available nodes.

Note that detection of a node going down should be nearly instantaneous, since the mechanism used to keep tabs on a node is a blocking Redis BLPOP call (no polling), which fails almost immediately when the node actually goes offline. To avoid false positives (i.e., intermittent flaky network interruptions), the Node Manager only marks a node as unavailable if it fails to communicate with it 3 times (configurable via --max-failures; see the configuration options below). Note that you can (and should) deploy multiple Node Manager daemons, since they each report periodic health reports/snapshots of the Redis servers. A "node strategy" is used to determine whether a node is actually unavailable; by default a majority strategy is used, but "consensus" and "single" can be configured as well.

node-gyp - Node.js native addon build tool

  •    Python

node-gyp is a cross-platform command-line tool written in Node.js for compiling native addon modules for Node.js. It bundles the gyp project used by the Chromium team and takes away the pain of dealing with the various differences in build platforms. It is the replacement for the node-waf program, which was removed in node v0.8. If you have a native addon for node that still has a wscript file, then you should definitely add a binding.gyp file to support the latest versions of node. Multiple target versions of node are supported (i.e. 0.8, ..., 4, 5, 6, etc.), regardless of what version of node is actually installed on your system (node-gyp downloads the necessary development files or headers for the target version).

node-fibers - Fiber/coroutine support for v8 and node.

  •    C++

Fibers, sometimes called coroutines, are a powerful tool that exposes an API to jump between multiple call stacks from within a single thread. This can be useful to make code written for a synchronous library play nicely in an asynchronous environment. Note: node-fibers uses node-gyp for building. To manually invoke the build process, you can use node-gyp rebuild, which will put the compiled extension in build/Release/fibers.node. However, when you do require('fibers'), it will expect the module to be in, for example, bin/linux-x64-v8-3.11/fibers.node. You can manually put the module there every time you build, or you can use the included build script: either npm install or node build -f will do this for you. If you are going to be hacking on node-fibers, it may be worthwhile to first do node-gyp configure; for subsequent rebuilds you can then just do node-gyp build, which is faster than a full npm install or node-gyp rebuild.

nodock - Docker Compose for Node projects with Node, MySQL, Redis, MongoDB, NGINX, Apache2, Memcached, Certbot and RabbitMQ images

  •    Shell

The Docker Node.js image is very simple: you give it an entrypoint and it runs it. This is fine for very simple/small scripts, but for larger projects you'll probably want something a bit more robust. The goal of NoDock is to provide a complete environment for your Node project: Node.js service(s), databases, web servers, queues, etc., while doing the "wiring" for you.