DotNetNuke Amazon S3 Folder Integration


A set of providers (data and authorization) that provide Amazon S3 folder services to a DotNetNuke installation in an integrated and seamless manner. This package allows existing modules to utilize externally-persisted S3 files without any modification.

http://dnnamazons3.codeplex.com/

Related Projects

git-bigstore - Bigstore is a Git extension that helps you track big files in your repositories.


You will probably want to set up an IAM user to manage the bucket you'll be using to upload your media. Here's an example user policy (substitute your own bucket name in the Resource ARNs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1401201989000",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:ListBucket",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:AbortMultipartUpload",
        "s3:ListBucketMultipartUploads",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR-BUCKET",
        "arn:aws:s3:::YOUR-BUCKET/*"
      ]
    }
  ]
}
```
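
To put the policy in place, one option is a short boto3 sketch like the one below; the user name, policy name, and the `bigstore-policy.json` filename are placeholders rather than anything git-bigstore prescribes.

```python
import boto3

iam = boto3.client("iam")

# Read the example policy above, saved locally under a hypothetical filename.
with open("bigstore-policy.json") as f:
    policy_document = f.read()

# Placeholder user and policy names.
iam.create_user(UserName="media-uploader")
iam.put_user_policy(
    UserName="media-uploader",
    PolicyName="bigstore-s3-access",
    PolicyDocument=policy_document,
)
```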

legacy


Legacy stores data in S3 per node under `/s3-bucket/s3-base-path/{NODE-HOSTNAME}`, and keeps small JSON files used to determine whether a snapshot has already been made for a table under `/s3-bucket/s3-base-path/{NODE-HOSTNAME}/.legacy`. Backups are stored in the snapshot directory so as to make restores easier if the need arises. Current tasks: selecting specific keyspaces (rather than all of them), requesting a brand new snapshot (via a CLI option), and reducing memo...
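
A minimal boto3 sketch of that layout might look like the following; the bucket, base path, and the per-table `{table}.json` marker name are assumptions for illustration, not details taken from the project.

```python
import socket

import boto3
from botocore.exceptions import ClientError

# Placeholder bucket and base path; the node hostname forms the per-node prefix described above.
BUCKET = "s3-bucket"
BASE_PATH = "s3-base-path"
NODE = socket.gethostname()

s3 = boto3.client("s3")


def snapshot_prefix():
    """Per-node prefix where snapshots (and restorable backups) live."""
    return f"{BASE_PATH}/{NODE}/"


def already_snapshotted(table):
    """Check the .legacy metadata area for a per-table marker (marker filename is hypothetical)."""
    key = f"{BASE_PATH}/{NODE}/.legacy/{table}.json"
    try:
        s3.head_object(Bucket=BUCKET, Key=key)
        return True
    except ClientError:
        return False
```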

C# Library and Code for Amazon S3


An advanced C# library for interfacing with the Amazon S3 system. Among its powerful features are:

- Full support for data streaming. No need to load data into memory before sending to S3.
- Data encryption.
- Thread safety and live statistics. Perform multiple simultaneous up...

wp-s3


WP-S3 copies the media files used in your blog posts to the Amazon S3 cloud. It uses only filters to replace the media URLs in a post when the media is available in S3. WordPress cron functionality is used to batch media uploads to S3. This plugin is very safe and will not modify anything in your database.

mock-s3 - A Python clone of fake-s3, a lightweight server clone of Amazon S3


A Python clone of fake-s3, a lightweight server clone of Amazon S3



s3 - Generate signed S3 URLs, and GET/PUT S3 objects via cURL


Generate signed S3 URLs, and GET/PUT S3 objects via cURL
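
As a rough illustration of what such a tool does, here is a hedged boto3 sketch that generates presigned GET and PUT URLs for use with cURL; the bucket and key are placeholders, and this is not the project's own code.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket and key; URLs expire after one hour.
bucket, key = "my-bucket", "path/to/object.txt"

get_url = s3.generate_presigned_url(
    "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=3600
)
put_url = s3.generate_presigned_url(
    "put_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=3600
)

# The URLs can then be used directly with cURL:
#   curl "$GET_URL" -o object.txt
#   curl -X PUT --upload-file object.txt "$PUT_URL"
print(get_url)
print(put_url)
```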

s3-versioning - Command line tool to enable/suspend s3 versioning on Amazon S3 buckets


Command line tool to enable/suspend s3 versioning on Amazon S3 buckets
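
The underlying API call is small; a minimal boto3 sketch (not the tool itself, with a placeholder bucket name) might look like this.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder

# Enable versioning; use "Suspended" to suspend it instead.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Confirm the current state (the key is absent if versioning was never configured).
print(s3.get_bucket_versioning(Bucket=bucket).get("Status", "Not configured"))
```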

s3-bucket-policy - Example S3 Bucket Policy for processing S3 buckets from Packetloop


Example S3 Bucket Policy for processing S3 buckets from Packetloop
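
Without reproducing Packetloop's actual policy, a hedged sketch of applying a minimal cross-account read policy with boto3 could look like the following; the account ID, bucket name, and statement are illustrative placeholders.

```python
import json

import boto3

bucket = "my-log-bucket"  # placeholder

# Illustrative policy: grant another AWS account (placeholder ID) read access to the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowExternalRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```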

Umbraco-S3 - Implementation of S3 for Umbraco. Keep all your media in S3.


Implementation of S3 for Umbraco. Keep all your media in S3.

s3-upload - S3-Upload is a simple helper for creating a signed S3 upload policy.


S3-Upload is a simple helper for creating a signed S3 upload policy.
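
boto3 exposes the same idea via presigned POSTs; a hedged sketch (placeholder bucket and upload constraints, not the project's API) is shown below.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket; the policy limits uploads to 10 MB under the uploads/ prefix.
post = s3.generate_presigned_post(
    Bucket="my-bucket",
    Key="uploads/${filename}",
    Fields={"acl": "private"},
    Conditions=[
        {"acl": "private"},
        ["starts-with", "$key", "uploads/"],
        ["content-length-range", 0, 10 * 1024 * 1024],
    ],
    ExpiresIn=3600,
)

# post["url"] plus post["fields"] are what a browser form (or multipart POST) submits.
print(post["url"])
print(post["fields"])
```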

s3-proxy - S3-Proxy is a poor man's HTTP Proxy for static web-resources stored on Amazon(tm) S3


S3-Proxy is a poor man's HTTP Proxy for static web-resources stored on Amazon(tm) S3
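
The general idea can be sketched in a few lines of Python; this minimal stand-in (placeholder bucket, standard-library HTTP server plus boto3) is not the project's implementation.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

import boto3

BUCKET = "my-static-assets"  # placeholder

s3 = boto3.client("s3")


class S3ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Map the request path directly to an S3 key.
        key = self.path.lstrip("/")
        try:
            obj = s3.get_object(Bucket=BUCKET, Key=key)
        except s3.exceptions.NoSuchKey:
            self.send_error(404)
            return
        body = obj["Body"].read()
        self.send_response(200)
        self.send_header("Content-Type", obj.get("ContentType", "application/octet-stream"))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


HTTPServer(("", 8080), S3ProxyHandler).serve_forever()
```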

spree-s3 - Amazon S3 integration for your Spree store, using the aws-s3 gem.


Amazon S3 integration for your Spree store, using the aws-s3 gem.

aws-s3 - AWS-S3 is a Ruby implementation of Amazon's S3 REST API


AWS-S3 is a Ruby implementation of Amazon's S3 REST API

github-s3-deploy - AWS Lambda function, triggered by Github/SNS webhook, to sync new commits in an S3 bucket


If you have a static web site hosted in an S3 bucket, and you version control that site using GitHub, this script (and its associated GitHub / AWS configuration) will take new commits to your repo and sync them into your S3 bucket. For new repositories, you should first set up the webhooks, SNS topics, etc. before pushing any code; the first commit will then move all code into place. For existing repositories / S3 buckets, make sure your repo and your bucket are in sync before continuing.
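
For flavor, a hedged Lambda-style sketch is shown below: it assumes a public repository, pulls the pushed commit as a tarball, and copies the files into the bucket with boto3. The bucket name is a placeholder, and the real project's handler and configuration differ.

```python
import io
import json
import tarfile
import urllib.request

import boto3

BUCKET = "my-site-bucket"  # placeholder bucket name


def handler(event, context):
    """Triggered by an SNS message carrying a GitHub push payload."""
    push = json.loads(event["Records"][0]["Sns"]["Message"])
    owner_repo = push["repository"]["full_name"]  # e.g. "me/site"
    sha = push["after"]                            # commit to deploy

    # Download the pushed commit as a tarball (public repository assumed).
    url = f"https://github.com/{owner_repo}/archive/{sha}.tar.gz"
    data = urllib.request.urlopen(url).read()

    s3 = boto3.client("s3")
    with tarfile.open(fileobj=io.BytesIO(data), mode="r:gz") as tar:
        for member in tar.getmembers():
            if not member.isfile() or "/" not in member.name:
                continue
            # Strip the leading "<repo>-<sha>/" directory from the archive path.
            key = member.name.split("/", 1)[1]
            s3.upload_fileobj(tar.extractfile(member), BUCKET, key)
```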

teamcity-s3-artifact-storage-plugin - TeamCity plugin which allows replacing the TeamCity built-in artifacts storage with AWS S3


This plugin replaces the TeamCity built-in artifact storage with AWS S3. The artifact storage can be changed at the project level; after the change, new artifacts produced by that project's builds are published to the specified AWS S3 bucket. Besides publishing, the plugin also implements resolution of artifact dependencies and clean-up of build artifacts. Baseline functionality is finished; feedback is wanted.

akubra - Simple solution to keep independent S3 storages in sync


Akubra is a simple solution for keeping independent S3 storages in sync - near real time and eventually consistent. Keeping storage clusters that handle a great volume of new objects (about 300k obj/h) synchronized is most efficient when they are all fed the incoming data at once. That's what Akubra does, with a minimal memory and CPU footprint.
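
Conceptually, that fan-out write can be sketched with boto3 and a thread pool, as below; the backend endpoints and bucket names are placeholders, and this is only an illustration of the fan-out idea, not Akubra's own mechanism.

```python
from concurrent.futures import ThreadPoolExecutor

import boto3

# Placeholder S3-compatible backends that should receive every incoming object.
BACKENDS = [
    {"endpoint_url": "https://s3.cluster-a.example.com", "bucket": "media"},
    {"endpoint_url": "https://s3.cluster-b.example.com", "bucket": "media"},
]

clients = [
    (boto3.client("s3", endpoint_url=b["endpoint_url"]), b["bucket"]) for b in BACKENDS
]


def fan_out_put(key, body):
    """Write the same object to every backend so the storages stay in sync."""
    with ThreadPoolExecutor(max_workers=len(clients)) as pool:
        futures = [
            pool.submit(client.put_object, Bucket=bucket, Key=key, Body=body)
            for client, bucket in clients
        ]
        for future in futures:
            future.result()  # surface any failed backend write
```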

mycroft


Mycroft is an orchestrator that coordinates MRJob, S3, and Redshift to automatically perform light transformations on daily log data. Just specify a cluster, schema version, S3 path, and start date, and Mycroft will watch S3 for new data, transforming and loading it without user action. More specifically, Mycroft takes JSON data stored in S3 and maps it to a format that can be copied into Redshift using a schema you define. The results of that map are stored back into S3, then loaded into Redshift. Mycroft's web interface can be used to monitor the progress of in-flight data loading jobs, and to pause, resume, cancel, or delete existing jobs. Mycroft notifies you via email when new data is successfully loaded or if any issues arise. It also provides tools to automatically generate schemas from log data, and even manages the expiration of old data as well as vacuuming and analyzing data.

Mycroft is composed of three services: an API, a worker, and a scanner. The API is used to add jobs, control them, and track their progress. The worker runs jobs, and the scanner monitors the tables Mycroft uses to store job metadata and inserts jobs into an SQS queue for the worker.
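
The load step that Mycroft automates boils down to a Redshift COPY from S3; a hedged sketch using psycopg2 follows, with placeholder cluster details, table, S3 paths, and IAM role.

```python
import psycopg2

# Placeholder connection details for a Redshift cluster.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="loader",
    password="REPLACE_ME",
)

# Placeholder table, S3 prefix, jsonpaths file, and IAM role.
copy_sql = """
    COPY daily_log_events
    FROM 's3://my-transformed-logs/2014/06/01/'
    CREDENTIALS 'aws_iam_role=arn:aws:iam::123456789012:role/redshift-loader'
    JSON 's3://my-transformed-logs/schemas/daily_log_events.jsonpaths'
    GZIP TIMEFORMAT 'auto';
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)
conn.close()
```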

s3mysqldump - Dump mysql tables to s3, and parse them


s3mysqldump is a tool to dump MySQL tables to S3 so they can be consumed by Elastic MapReduce, etc. For example, it can dump the 'user' table in the 'db' database to the S3 bucket s3://emr-storage/, where 'my.cnf' specifies the MySQL parameters and 'boto.cfg' is the configuration file for the S3 connection, specifying things like AWS credentials.
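
The core of what the tool automates can be sketched as a mysqldump piped into S3 with boto3; the database, table, bucket, and key layout below are placeholders, and this is not s3mysqldump's actual command line.

```python
import subprocess

import boto3

# Placeholder names; s3mysqldump itself takes these from its command line and config files.
DB, TABLE, BUCKET = "db", "user", "emr-storage"

# Dump one table using the client settings in my.cnf.
dump = subprocess.run(
    ["mysqldump", "--defaults-file=my.cnf", DB, TABLE],
    check=True,
    capture_output=True,
).stdout

# Upload the dump so it can be picked up by Elastic MapReduce jobs.
boto3.client("s3").put_object(Bucket=BUCKET, Key=f"{DB}/{TABLE}.sql", Body=dump)
```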

S3 Browser for Windows Live Writer


A Windows Live Writer plugin for browsing your Amazon S3 account and inserting references to objects in S3 buckets.

Standalone Windows .EXE command line utility for Amazon S3 & EC2


A Windows command-line utility for Amazon's S3 & EC2 web services that requires no installation, is a single .EXE file with no DLLs, and needs only .NET 2.0 or Mono, so it will work on a plain-vanilla Windows 2003 installation.