Tutorial: ADO.NET Data Services (with Source Code)


Find more explanations and a more detailed step-by-step guide on my blog: http://jasondeoliveira.com




Related Projects

Tutorial: LINQ data query language - Example Project

Find detailed explanations and a step-by-step guide on my blog: http://jasondeoliveira.com

giga - Concurrent File I/O on Arbitrarily-Sized Files

```bash
make CONCURRENT=false PERFORMANCE=true test
```

Updating
----

```bash
git pull
git submodule foreach --recursive git checkout master
git submodule foreach --recursive git pull
```

Usage
----

An example can be found in `tutorial/` and can be executed by running:

```bash
cd tutorial/
make
./tutorial.app
```

All header and library files in the tutorial are symbolically linked to `tutorial/external/giga` -- removing the symbolic link and cloning `giga` (and installing dependencies, as per above) into the same place will also work.

gke-service-accounts-tutorial - A tutorial on using Google Cloud service account with Google Container Engine (GKE)

Applications running on Google Container Engine have access to other Google Cloud Platform services, such as Stackdriver Trace and Cloud Pub/Sub. To access these services, a service account must be created and used by client applications. This tutorial will walk you through deploying the echo application, which creates Pub/Sub messages from HTTP requests and sends trace data to Stackdriver Trace.

jena-plb-tutorial - A tutorial example for Jena, based around DOAP data

A tutorial example for Jena, based around DOAP data

grails-rest-example - An example grails project for RESTful web services

An example grails project for RESTful web services

i18n-tutorial-example - Example project for i18n-tutorial: pull requests very welcome!

Example project for i18n-tutorial: pull requests very welcome!

myd - The My Data Project enables you to import and analyse your data from web services and apps.

The My Data Project enables you to import and analyse your data from web services and apps.

spark-parquet-thrift-example - Example Spark project using Parquet as a columnar store with Thrift objects

Apache Spark is a research project for distributed computing that interacts with HDFS and makes heavy use of in-memory caching. Modern datasets contain hundreds or thousands of columns and are often too large to cache all the columns in Spark's memory, so Spark has to resort to paging to disk. The disk-paging penalty can be lessened or removed if the Spark application only interacts with a subset of the columns in the overall database: a columnar store such as Parquet loads only the specified columns of data into a Spark RDD.

Matt Massie's example uses Parquet with Avro for data serialization and filters loading based on an equality predicate, but does not show how to load only a subset of columns. This project is a complete Scala/sbt project that uses Thrift for data serialization and shows how to load columnar subsets.
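The project itself is Scala/Thrift/Parquet; as a minimal stand-alone illustration of *why* a columnar layout allows reading only a subset of columns (this toy Python sketch is not Spark, Thrift, or Parquet — the data and function names are hypothetical):

```python
# Toy comparison of row-oriented vs. column-oriented storage.
# Illustrative only: real columnar subset loading happens inside
# Parquet/Spark, not in application code like this.

rows = [
    {"id": 1, "name": "alice", "score": 9.5},
    {"id": 2, "name": "bob",   "score": 7.1},
    {"id": 3, "name": "carol", "score": 8.8},
]

def row_store_column(rows, col):
    # A row store must walk every record, touching all of its
    # fields, even when only one column is needed.
    return [r[col] for r in rows]

# A column store keeps each column's values contiguously,
# so a subset read only touches the requested columns.
column_store = {col: [r[col] for r in rows] for col in rows[0]}

def column_store_subset(store, cols):
    # Columns not listed in `cols` are never read at all.
    return {c: store[c] for c in cols}

subset = column_store_subset(column_store, ["id", "score"])
print(subset)  # only "id" and "score" are materialized
```

The same principle is what lets Parquet skip entire column chunks on disk when Spark requests a projection of the schema.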

Project Niagara

An alternative way to supply validation metadata for use in Silverlight projects. A secondary goal of the project is to provide validation services to ADO.NET Data Services and Web Services, providing parity with RIA Services.

AzureUsageAndBillingPortal - This web application solution enables you to monitor the usage and billing for any Azure subscriptions you have access to

This project retrieves the usage details of all Azure services in specific Azure subscriptions. Using a PowerBI dashboard, one can view up-to-date billing data (Azure service usage details) for any subscription registered with the system.

First, a user with an active Azure subscription registers it with the system through the registration website. Only the subscription owner's Microsoft ID (i.e. an email address ending in hotmail.com, outlook.com, etc.) or organization ID (i.e. an email address ending in microsoft.com, fabrikam.com, etc.) and the subscription's domain name are needed. Once the subscription is registered, the system automatically retrieves the past 3 years (an adjustable parameter) of usage data and makes it viewable through a PowerBI dashboard.

Additionally, every day at 00:00 UTC, the system automatically retrieves the previous day's usage and billing data for each registered subscription, keeping all records up to date on a daily basis. There is always a 1-day delay in the data presented through PowerBI (you currently cannot retrieve the past few hours' data immediately, because the main Azure system first collects all data from datacenters in different time zones). The PowerBI dashboard allows users to filter data in real time by parameters such as subscription, service, and sub-service.
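The daily-job scheduling described above can be sketched in a few lines. This is an illustrative stand-alone snippet, not code from the project (the function name and structure are hypothetical): a run at 00:00 UTC covers the full previous calendar day, which is why the dashboard always lags by one day.

```python
from datetime import datetime, timedelta, timezone

def previous_day_window(now=None):
    """Return the [start, end) UTC window for 'yesterday'.

    Hypothetical sketch of the daily-retrieval logic: the job
    queries usage for the whole previous UTC calendar day.
    """
    now = now or datetime.now(timezone.utc)
    today_start = now.replace(hour=0, minute=0, second=0, microsecond=0)
    return today_start - timedelta(days=1), today_start

start, end = previous_day_window(
    datetime(2017, 5, 2, 0, 0, tzinfo=timezone.utc)
)
print(start.isoformat(), end.isoformat())
# 2017-05-01T00:00:00+00:00 2017-05-02T00:00:00+00:00
```

Working in UTC throughout avoids the daylight-saving ambiguities that a local-time window would introduce.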

data-integration - Data Integration using Cloud Services. This is the project of my summer VT

Data Integration using Cloud Services. This is the project of my summer VT

Excel Services PowerPivot Sample Data

SQL Server 2012 sample data for Excel Services PowerPivot Tutorial.


Dilshya PHP MySQL JSON data services. For installation and a tutorial, please follow this link: http://www.dilshya.com/?p=1

facebook-example - Example Facebook application built with Chaplin.

This is a small example application built with Chaplin.js, an application structure on top of Backbone.js. It can be used as a boilerplate.

This example uses a Facebook application named “Chaplin Example App”. On login, you will be asked to grant access rights to this Facebook app. Of course, this app will not post anything to Facebook on your behalf or publish/submit your personal data, and you are free to revoke its access rights at any time. You can easily create your own Facebook app and change the app ID in coffee/lib/services/facebook.coffee.


No matter what your domain of interest or expertise, the internet is a treasure trove of useful data that comes in many shapes, forms, and sizes, from beautifully documented fast APIs to data that must be scraped from deep inside 1990s HTML pages. In this 3-hour tutorial you will learn how to programmatically read in various types of web data from experts in the field (founders of the rOpenSci project and the training lead of RStudio). By the end of the tutorial you will have a basic idea of how to wrap an R package around a standard API, extract common non-standard data formats, and scrape data into tidy data frames from web pages.

Background knowledge: familiarity with base R and the ability to write functions.

ACS-Deployment-Tutorial - A tutorial on how to deploy a Dockerised deep learning application on Azure Container Services

Deploying machine learning models can often be tricky due to their numerous dependencies, and deep learning models often even more so. One way to overcome this is to use Docker containers; unfortunately, it is rarely straightforward. In this tutorial, we will demonstrate how to deploy a pre-trained deep learning model using Azure Container Services, which allows us to orchestrate a number of containers using DC/OS. By using Azure Container Services, we can ensure that the deployment is performant, scalable, and flexible enough to accommodate any deep learning framework. The Docker image we will be deploying can be found here. It contains a simple Flask web application behind an Nginx web server. The deep learning framework we will use is the Microsoft Cognitive Toolkit (CNTK), with a pre-trained model, specifically the ResNet 152 model.

Azure Container Services enables you to configure, construct, and manage a cluster of virtual machines pre-configured to run containerized applications. Once the cluster is set up, you can use a number of open-source scheduling and orchestration tools, such as Kubernetes and DC/OS. This is ideal for machine learning applications, since Docker containers give us ultimate flexibility in the libraries we use and allow us to easily scale up based on demand, while always ensuring that our application remains performant. You can create an ACS cluster through the Azure portal, but in this tutorial we will construct it using the Azure CLI.

spring-data-jpa-example - This is an example Maven project for Spring Data JPA

This is an example Maven project for Spring Data JPA

congress - Congress

Congress is an open policy framework for the cloud. With Congress, a cloud operator can declare, monitor, enforce, and audit "policy" in a heterogeneous cloud environment. Congress takes inputs from a cloud's various services; in OpenStack, for example, it fetches information about VMs from Nova, network state from Neutron, and so on. Congress then feeds the input data from those services into its policy engine, where it verifies that the cloud's actual state abides by the cloud operator's policies. Congress is designed to work with any policy and any cloud service. Its job is to help people manage the plethora of state across all cloud services with a succinct policy language.

WCF Data Service Format Extensions for CSV, TXT

This project adds support for legacy formats like CSV and TXT (CSV export) to the data service output and allows the $format=txt query option. By default, WCF Data Services supports Atom and JSON responses; however, legacy systems do not understand Atom or JSON but do understand CSV and TXT formats.
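The kind of output such a format extension produces can be sketched language-neutrally. The real project is C# inside the WCF Data Services pipeline; this Python snippet only illustrates turning an OData-style result set into a CSV body (all names and sample data here are hypothetical):

```python
import csv
import io

def to_csv(records):
    """Serialize a list of dicts (an OData-style result set) to CSV.

    Minimal sketch of what a $format=txt / CSV response body
    might contain; not the project's actual implementation.
    """
    if not records:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0].keys()))
    writer.writeheader()   # header row from the entity's property names
    writer.writerows(records)
    return buf.getvalue()

body = to_csv([
    {"CustomerID": "ALFKI", "City": "Berlin"},
    {"CustomerID": "ANATR", "City": "Mexico City"},
])
print(body)
```

The point is simply that a flat, delimiter-separated body is trivial for legacy consumers to parse, unlike an Atom or JSON payload.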

wso2-ds-tests - A project for testing the WSO2 Data Services Server and its various features.

A project for testing the WSO2 Data Services Server and its various features.