spark - Firely and Incendi's open source FHIR server


Spark is a public domain FHIR server developed in C#, originally built by Firely and now maintained by Incendi. Spark implements a major part of the FHIR specification and has been used and tested during several HL7 WGM Connectathons.

https://github.com/FirelyTeam/spark
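As with any FHIR server, you can probe a running Spark instance over plain HTTP. The sketch below is a minimal example, assuming a hypothetical local deployment with base URL http://localhost:5555/fhir (adjust to your setup); it fetches the server's CapabilityStatement from the standard [base]/metadata endpoint.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SparkMetadataCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical base URL of a locally running Spark FHIR server; adjust to your deployment.
        String fhirBase = "http://localhost:5555/fhir";

        // Every FHIR server advertises its capabilities at [base]/metadata.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(fhirBase + "/metadata"))
                .header("Accept", "application/fhir+json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("Status: " + response.statusCode());
        System.out.println(response.body());
    }
}
```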

Related Projects

Synthea - Synthetic Patient Population Simulator

  •    Java

Synthea is a Synthetic Patient Population Simulator. The goal is to output synthetic, realistic (but not real) patient data and associated health records in a variety of formats.

hapi-fhir-jpaserver-starter

  •    Java

This project is a complete starter project you can use to deploy a FHIR server using HAPI FHIR JPA. Running the Docker image with the default configuration maps port 8080 in the container to port 8080 on the host. Once running, you can open http://localhost:8080/ in a browser to access the HAPI FHIR server's UI, or use http://localhost:8080/fhir/ as the base URL for your REST requests.
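A minimal sketch of exercising the starter server over the base URL above, using the HAPI FHIR client library (the hapi-fhir-client and hapi-fhir-structures-r4 artifacts are assumed to be on the classpath); it creates a Patient and then searches for it by family name.

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.api.MethodOutcome;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.Bundle;
import org.hl7.fhir.r4.model.Patient;

public class HapiStarterClient {
    public static void main(String[] args) {
        // Base URL from the starter project's default configuration (port 8080 on the host).
        FhirContext ctx = FhirContext.forR4();
        IGenericClient client = ctx.newRestfulGenericClient("http://localhost:8080/fhir");

        // Create a Patient resource on the server.
        Patient patient = new Patient();
        patient.addName().setFamily("Chalmers").addGiven("Peter");
        MethodOutcome outcome = client.create().resource(patient).execute();
        System.out.println("Created: " + outcome.getId());

        // Search the same resource back by family name.
        Bundle results = client.search()
                .forResource(Patient.class)
                .where(Patient.FAMILY.matches().value("Chalmers"))
                .returnBundle(Bundle.class)
                .execute();
        System.out.println("Matches in first page: " + results.getEntry().size());
    }
}
```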

fhir-net-api - The official .NET API for HL7 FHIR

  •    CSharp

This is the official support API for working with HL7 FHIR on the Microsoft .NET (dotnet) platform. The planned DSTU2.1 release was never published by HL7, but you will still find traces of it; in particular, the NuGet package for it is still kept available.

firely-net-sdk - The official Firely .NET SDK for HL7 FHIR

  •    CSharp

This is the official support SDK for working with HL7 FHIR on the Microsoft .NET (dotnet) platform. Read the release notes at firely-net-sdk/releases. You can find documentation about this SDK on the Firely docs site.


Snow Owl - Scalable, open source terminology server (SNOMED CT, ICD-10, LOINC, dm+d, ATC and others)

  •    Java

Snow Owl is a highly scalable, open source terminology server with revision-control capabilities and collaborative authoring platform features. It allows you to store, search and author high volumes of terminology artifacts quickly and efficiently. It can maintain multiple versions (both published and unpublished) of each terminology artifact and provides APIs to access them all. It provides full SNOMED CT terminology support.

spark-jobserver - REST job server for Apache Spark

  •    Scala

spark-jobserver provides a RESTful interface for submitting and managing Apache Spark jobs, jars, and job contexts. This repo contains the complete Spark job server project, including unit tests and deploy scripts. It was originally started at Ooyala, but this is now the main development repo.
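A hedged sketch of driving that REST interface from Java with java.net.http: the port (8090) and the /jars and /jobs endpoints follow the project's documented defaults, while the jar path, application name, and job class are placeholders you would replace with your own.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class JobServerSubmit {
    public static void main(String[] args) throws Exception {
        // Default spark-jobserver address; adjust host/port to your deployment.
        String base = "http://localhost:8090";
        HttpClient http = HttpClient.newHttpClient();

        // 1. Upload the application jar under the name "wordcount" (path is a placeholder).
        HttpRequest upload = HttpRequest.newBuilder()
                .uri(URI.create(base + "/jars/wordcount"))
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("target/wordcount-assembly.jar")))
                .build();
        System.out.println(http.send(upload, HttpResponse.BodyHandlers.ofString()).body());

        // 2. Submit a job against the uploaded jar; the job class shown is an example, adjust to yours.
        HttpRequest submit = HttpRequest.newBuilder()
                .uri(URI.create(base + "/jobs?appName=wordcount&classPath=spark.jobserver.WordCountExample"))
                .POST(HttpRequest.BodyPublishers.ofString("input.string = a b c a b"))
                .build();
        System.out.println(http.send(submit, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```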

sparkmagic - Jupyter magics and kernels for working with remote Spark clusters

  •    Python

Sparkmagic is a set of tools for interactively working with remote Spark clusters through Livy, a Spark REST server, in Jupyter notebooks. The Sparkmagic project includes a set of magics for interactively running Spark code in multiple languages, as well as kernels you can use to turn Jupyter into an integrated Spark environment. There are two ways to use sparkmagic; see the examples section for a demonstration of both modes of execution.

spark - .NET for Apache® Spark™ makes Apache Spark™ easily accessible to .NET developers.

  •    CSharp

.NET for Apache Spark provides high-performance APIs for using Apache Spark from C# and F#. With these .NET APIs, you can access the most popular DataFrame and Spark SQL aspects of Apache Spark for working with structured data, as well as Spark Structured Streaming for working with streaming data. .NET for Apache Spark is compliant with .NET Standard, a formal specification of .NET APIs that are common across .NET implementations. This means you can use .NET for Apache Spark anywhere you write .NET code, allowing you to reuse all the knowledge, skills, code, and libraries you already have as a .NET developer.

docker-spark

  •    Shell

This repository contains a Docker file to build a Docker image with Apache Spark. This Docker image depends on our previous Hadoop Docker image, available at the SequenceIQ GitHub page. The base Hadoop Docker image is also available as an official Docker image. There are two deploy modes that can be used to launch Spark applications on YARN.

Spark - Cross-platform real-time collaboration client optimized for business and organizations.

  •    Java

Spark is an open source, cross-platform IM client optimized for businesses and organizations. It features built-in support for group chat, telephony integration, and strong security. It also offers a great end-user experience with features like in-line spell checking, group chat room bookmarks, and tabbed conversations. Combined with the Openfire server, Spark is the easiest and best alternative to using insecure public IM networks.

aws-serverless-java-container - A Java wrapper to run Spring, Jersey, Spark, and other apps inside AWS Lambda

  •    Java

aws-serverless-java-container is a collection of interfaces and their implementations that let you run Java applications written with frameworks such as Jersey or Spark in AWS Lambda. The library contains a core artifact called aws-serverless-java-container-core that defines the required interfaces and base classes, as well as default implementations of the Java servlet HttpServletRequest and HttpServletResponse. The library also includes two initial implementations of the interfaces to support Jersey apps (aws-serverless-java-container-jersey) and Spark (aws-serverless-java-container-spark).
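A sketch of the usual wiring for the Spark (web framework) flavor: a Lambda RequestHandler that delegates to SparkLambdaContainerHandler. The class and package names follow the library's documented proxy types; verify them, and the placeholder route, against the version of aws-serverless-java-container-spark you depend on.

```java
import com.amazonaws.serverless.exceptions.ContainerInitializationException;
import com.amazonaws.serverless.proxy.model.AwsProxyRequest;
import com.amazonaws.serverless.proxy.model.AwsProxyResponse;
import com.amazonaws.serverless.proxy.spark.SparkLambdaContainerHandler;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import static spark.Spark.get;

public class StreamLambdaHandler implements RequestHandler<AwsProxyRequest, AwsProxyResponse> {

    private static SparkLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    static {
        try {
            // Initialize the embedded Spark container once per Lambda execution environment.
            handler = SparkLambdaContainerHandler.getAwsProxyHandler();
            // Define routes with the usual Spark DSL; this route is a placeholder.
            get("/ping", (request, response) -> "pong");
        } catch (ContainerInitializationException e) {
            throw new RuntimeException("Could not initialize Spark container", e);
        }
    }

    @Override
    public AwsProxyResponse handleRequest(AwsProxyRequest awsProxyRequest, Context context) {
        // Translate the API Gateway proxy event into a servlet request for Spark and back.
        return handler.proxy(awsProxyRequest, context);
    }
}
```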

Mobius - C# and F# language binding and extensions to Apache Spark

  •    CSharp

Mobius provides C# language bindings to Apache Spark, enabling the implementation of Spark driver programs and data processing operations in .NET framework languages such as C# and F#. For more code samples, refer to the Mobius\examples directory or the Mobius\csharp\Samples directory.

snappydata - SnappyData - The Spark Database. Stream, Transact, Analyze, Predict in one cluster

  •    Scala

Apache Spark is a general purpose parallel computational engine for analytics at scale. At its core, it has a batch design center and is capable of working with disparate data sources. While this provides rich unified access to data, it can also be quite inefficient and expensive: analytic processing requires massive data sets to be repeatedly copied and data to be reformatted to suit Spark, and in many cases it ultimately fails to deliver the promise of interactive analytic performance. For instance, each time an aggregation is run on a large Cassandra table, the entire table must be streamed into Spark to do the aggregation, and caching within Spark is immutable and results in stale insight.

SnappyData takes a very different approach. SnappyData fuses a low-latency, highly available in-memory transactional database (GemFireXD) into Spark with shared memory management and optimizations. Data in the highly available in-memory store is laid out using the same columnar format as Spark (Tungsten), and all query engine operators are significantly more optimized through better vectorization and code generation. The net effect is an order of magnitude better performance than native Spark caching, and more than two orders of magnitude better Spark performance when working with external data sources.

docker-spark - Docker build for Apache Spark

  •    

A debian:stretch based Spark container. Use it in a standalone cluster with the accompanying docker-compose.yml, or as a base for more complex recipes.

spark-nlp - Natural Language Understanding Library for Apache Spark.

  •    Jupyter

John Snow Labs Spark-NLP is a natural language processing library built on top of Apache Spark ML. It provides simple, performant and accurate NLP annotations for machine learning pipelines that scale easily in a distributed environment. The library has been uploaded to the spark-packages repository: https://spark-packages.org/package/JohnSnowLabs/spark-nlp.

spark-cassandra-connector - DataStax Spark Cassandra Connector

  •    Scala

Lightning-fast cluster computing with Apache Spark™ and Apache Cassandra®. This library lets you expose Cassandra tables as Spark RDDs, write Spark RDDs to Cassandra tables, and execute arbitrary CQL queries in your Spark applications.
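A minimal sketch of the connector's Java API (via CassandraJavaUtil), assuming the connector artifact is on the classpath and a Cassandra node is reachable at 127.0.0.1; the keyspace and table names are hypothetical.

```java
import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;

import com.datastax.spark.connector.japi.CassandraRow;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class CassandraConnectorExample {
    public static void main(String[] args) {
        // Hypothetical Cassandra contact point; local-mode Spark for illustration.
        SparkConf conf = new SparkConf()
                .setAppName("cassandra-connector-demo")
                .setMaster("local[*]")
                .set("spark.cassandra.connection.host", "127.0.0.1");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Expose a Cassandra table as a Spark RDD and count its rows.
        JavaRDD<CassandraRow> rows = javaFunctions(sc).cassandraTable("test_ks", "users");
        System.out.println("Rows in test_ks.users: " + rows.count());

        sc.stop();
    }
}
```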





