This module provides comprehensive, fast, and simple image processing and manipulation capabilities. There are no external runtime dependencies, which means you don't have to install anything else on your system.
image buffer manipulate process resize scale rotate jpeg jpg png gif crop blur sharpen batch flip mirror border padding hue saturation lightness alpha transparency fade opacity contain cover

A comparison between some VPS providers that have data centers located in Europe. If you like this project and would like to see more plans and providers in the comparison, please take a look at this issue.
vps ansible comparison hosting reproducibility transparency independence cloud

A number of features or background services communicate with Google servers despite the absence of an associated Google account or compiled-in Google API keys. Furthermore, the normal build process for Chromium involves running Google's own high-level commands that invoke many scripts and utilities, some of which download and use pre-built binaries provided by Google. Even the final build output includes some pre-built binaries. Fortunately, the source code is available for everything. ungoogled-chromium should not be considered a fork of Chromium. The main reason for this is that a fork is associated with more significant deviations from Chromium, such as branding, configuration formats, file locations, and other interface changes. ungoogled-chromium will not modify the Chromium browser outside of the project's goals.
chromium inox iridium privacy control transparency google-chromium

Shapash is a Python library which aims to make machine learning interpretable and understandable by everyone. It provides several types of visualization that display explicit labels that everyone can understand. Data Scientists can understand their models easily and share their results. End users can understand the decision proposed by a model using a summary of the most influential criteria.
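
A minimal usage sketch, assuming Shapash's SmartExplainer class and its compile method as documented upstream (the import path and keyword names vary between releases, so treat them as assumptions); the scikit-learn model and synthetic data are only there to make the snippet self-contained:

```python
# Hedged sketch: SmartExplainer / compile / plot.features_importance are taken
# from Shapash's documentation, but exact import paths and keywords vary by release.
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

from shapash import SmartExplainer  # older releases: shapash.explainer.smart_explainer

# Illustrative data and model; any scikit-learn-compatible estimator would do.
X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(5)])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Wrap the fitted model, compile explanations for the test set, then inspect
# global feature importance with readable labels, or launch the interactive app.
xpl = SmartExplainer(model=model)
xpl.compile(x=X_test)
xpl.plot.features_importance()
# xpl.run_app()  # optional web dashboard
```
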
machine-learning transparency lime interpretability ethical-artificial-intelligence explainable-ml shap explainability

Ever Gauzy - Open-Source Business Management Platform for On-Demand and Sharing Economies. The platform includes Enterprise Resource Planning (ERP) software, Customer Relationship Management (CRM) software and Human Resource Management (HRM) software with employee Time and Activity Tracking functionality.
erp invoices accounting crm invoicing hr fintech time-tracker payroll billing budget expenses transparency team-management salary time-tracking bookkeeping hrm

GlobaLeaks is open-source/free software, developed by the Hermes Center for Transparency and Digital Human Rights, intended to enable secure and anonymous whistleblowing initiatives. For usage instructions, refer to the GlobaLeaks User Manual.
digital-human-rights transparency whistleblowing anonymity security angularjs twisted

Your HTML5 video source is re-rendered into a canvas element, adding the possibility to use transparency in your video. Alpha information is either included in the video's source file (moving) or in a separate <img> element (static). The package also ships with a simple CLI tool for automatically converting your RGBA video sources into the correct format.
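
A conceptual sketch of the "static" case, not the package's own code: a colour frame and a separate greyscale alpha map are recombined per pixel, which is essentially what the canvas re-rendering does in the browser. The array shapes and white background below are illustrative only:

```python
# Conceptual illustration only (the package itself works on a <canvas> in the
# browser): recombine a colour frame with a separate greyscale alpha map,
# then composite the result over an arbitrary background.
import numpy as np

def apply_static_alpha(frame_rgb: np.ndarray, alpha_img: np.ndarray) -> np.ndarray:
    """frame_rgb: (H, W, 3) uint8 colour frame; alpha_img: (H, W) uint8 mask.
    Returns an (H, W, 4) RGBA frame whose alpha channel comes from the mask."""
    return np.dstack([frame_rgb, alpha_img])

def composite_over(rgba: np.ndarray, background_rgb: np.ndarray) -> np.ndarray:
    """Standard 'over' compositing of the keyed frame onto an opaque background."""
    alpha = rgba[..., 3:4].astype(np.float32) / 255.0
    out = rgba[..., :3] * alpha + background_rgb * (1.0 - alpha)
    return out.astype(np.uint8)

# Illustrative frame: solid red, keyed with a mask that is opaque on the left half.
frame = np.zeros((4, 4, 3), dtype=np.uint8); frame[..., 0] = 255
mask = np.zeros((4, 4), dtype=np.uint8); mask[:, :2] = 255
white = np.full((4, 4, 3), 255, dtype=np.uint8)
print(composite_over(apply_static_alpha(frame, mask), white))
```
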
html5-video canvas-element alpha-channel html5 video transparency keying canvas jquery-plugin

This list focuses on (mostly open-source) projects that are directly related to humane tech topics. Please read the guidelines for contributing before sending PRs. This section is for inactive projects that are nonetheless still of interest regarding their topics or code.
ethics humane humane-tech transparency privacy privacy-first facts-first freedom-of-information freedom-of-speech health ergonomics democracy decentralization society social-media gamification mindfulness social awesome awesome-list

A curated, but probably biased and incomplete, list of awesome machine learning interpretability resources. If you want to contribute to this list (and please do!) read over the contribution guidelines, send a pull request, or contact me @jpatrickhall.
fairness xai interpretability iml fatml accountability transparency machine-learning data-science data-mining r awesome awesome-list machine-learning-interpretability interpretable-machine-learning interpretable-ml interpretable-ai interpretable-deep-learning explainable-ml

SmartSystemMenu extends the system menu of all windows in the system.
system-menu custom items transparency resize screenshot handle menus

HTML5 video with alpha channel transparencies
jquery-plugin html5 video transparency alpha-channel keying canvas

If you're looking to set up a new Parliamentary monitoring site, you should look at our Pombola project at https://www.mysociety.org/international/pombola/, which takes the lessons we've learned from writing and running TheyWorkForYou and uses them to create a modern, flexible and more easily adaptable platform for creating your own Parliamentary monitoring site. We strongly encourage people to use this rather than trying to adapt TheyWorkForYou to their own requirements. If you want to dig into the source of TheyWorkForYou, then carry right on below.
civic-tech transparency parliamentary-monitoring politics politicians uk parliament civictech mysociety

This repository contains example applications built on top of Trillian, showing that it's possible to apply Transparency concepts to problems other than Certificates. These examples are not supported per se, but the Trillian team will likely try to help where possible. You can contact them via the channels listed under Support on the Trillian repo.
transparency trillian examples

Makes an image's background pixels transparent, replacing the img element with a canvas. It uses the flood-fill method to make the background transparent.
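
A conceptual sketch of the flood-fill idea; the library itself does this on a <canvas> in the browser, so the Pillow/NumPy version below, with its top-left seed pixel and tolerance parameter, is purely illustrative:

```python
# Conceptual sketch of the flood-fill approach (the seed pixel and tolerance
# used here are assumptions): spread from a corner across connected pixels
# whose colour is close to the background colour, and make them fully transparent.
from collections import deque

import numpy as np
from PIL import Image

def translucify(path: str, tolerance: int = 30) -> Image.Image:
    px = np.array(Image.open(path).convert("RGBA"))
    h, w = px.shape[:2]
    background = px[0, 0, :3].astype(int)      # assume the top-left pixel is background
    visited = np.zeros((h, w), dtype=bool)
    queue = deque([(0, 0)])
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w) or visited[y, x]:
            continue
        visited[y, x] = True
        if np.abs(px[y, x, :3].astype(int) - background).sum() > tolerance:
            continue                            # colour differs too much: stop the fill here
        px[y, x, 3] = 0                         # background pixel -> fully transparent
        queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return Image.fromarray(px)
```
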
transparency image canvas pixels translucify replace dom

Django Public Project (DPP) is a custom CMS for making large public projects, political processes and enquiry commissions more transparent.
django transparency open-data public-project accountability berwatch political politics

View the project on the Google Summer of Code website. You can see the detailed timeline here.
open-source transparency rdf-schema blockchain

Machine learning algorithms create potentially more accurate models than linear models, but any increase in accuracy over more traditional, better-understood, and more easily explainable techniques is not practical for those who must explain their models to regulators or customers. For many decades, the models created by machine learning algorithms were generally taken to be black-boxes. However, a recent flurry of research has introduced credible techniques for interpreting complex, machine-learned models. Materials presented here illustrate applications or adaptations of these techniques for practicing data scientists. Want to contribute your own examples? Just make a pull request.
machine-learning jupyter-notebooks interpretability data-science data-mining h2o mli xai fatml transparency accountability fairness xgboost

Recent advances enable practitioners to break open machine learning’s “black box”. From machine learning algorithms guiding analytical tests in drug manufacture, to predictive models recommending courses of treatment, to sophisticated software that can read images better than doctors, machine learning has promised a new world of healthcare where algorithms can assist, or even outperform, professionals in consistency and accuracy, saving money and avoiding potentially life-threatening mistakes. But what if your doctor told you that you were sick but could not tell you why? Imagine a hospital that hospitalized and discharged patients but was unable to provide specific justification for these decisions. For decades, this was a roadblock for the adoption of machine learning algorithms in healthcare: they could make data-driven decisions that helped practitioners, payers, and patients, but they couldn’t tell users why those decisions were made.
xgboost healthcare interpretability xai iml transparency machine-learning data-science data-mining machine-learning-interpretability interpretable-ml interpretable-machine-learning explainable-ml

Monotonicity constraints can turn opaque, complex models into transparent, and potentially regulator-approved, models by ensuring predictions only increase or only decrease for any change in a given input variable. In this notebook, I will demonstrate how to use monotonicity constraints in the popular open source gradient boosting package XGBoost to train a simple, accurate, nonlinear classifier on the UCI credit card default data. Once we have trained a monotonic XGBoost model, we will use partial dependence plots and individual conditional expectation (ICE) plots to investigate the internal mechanisms of the model and to verify its monotonic behavior. Partial dependence plots show us the way machine-learned response functions change based on the values of one or two input variables of interest, while averaging out the effects of all other input variables. ICE plots can be used to create more localized descriptions of model predictions, and they pair nicely with partial dependence plots. An example of generating regulator-mandated reason codes from high-fidelity Shapley explanations for any model prediction is also presented. The combination of monotonic XGBoost, partial dependence, ICE, and Shapley explanations is likely the most direct way to create an interpretable machine learning model today.
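
A minimal sketch of the approach rather than the notebook itself: the synthetic features, their assumed directions of effect, and the hyperparameters below are illustrative; only XGBoost's monotone_constraints parameter and scikit-learn's partial dependence/ICE display are taken from their documented APIs:

```python
# Hedged sketch of the idea, not the notebook itself.
import numpy as np
import pandas as pd
from sklearn.inspection import PartialDependenceDisplay
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "pay_delay_months": rng.integers(0, 12, 2000),     # assumed: higher -> more likely to default
    "credit_limit": rng.normal(50_000, 15_000, 2000),  # assumed: higher -> less likely to default
})
logit = 0.4 * X["pay_delay_months"] - 0.00004 * X["credit_limit"]
y = (logit + rng.normal(0, 1, 2000) > 0).astype(int)

# "(1,-1)" enforces a non-decreasing effect for the first column and a
# non-increasing effect for the second, in column order.
model = XGBClassifier(n_estimators=200, max_depth=3, monotone_constraints="(1,-1)")
model.fit(X, y)

# Partial dependence (averaged) plus ICE curves (one per row) for both features;
# the curves should only move in the constrained directions.
PartialDependenceDisplay.from_estimator(
    model, X, features=["pay_delay_months", "credit_limit"], kind="both"
)
```
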
machine-learning fatml xai gradient-boosting-machine decision-tree data-science fairness interpretable-machine-learning interpretability machine-learning-interpretability iml accountability transparency data-mining interpretable-ml interpretable interpretable-ai lime h2o

A log of all DMCA takedown requests and their outcome, removed file metadata, and our canary. All dates in this repository are in the format MMM DD, YYYY [HH:mm] or YYYY-MM-DD [HH:mm].
whats-this canary transparency signed