Add some Depth to your fragments
android fragment depth 3d material design animation transition translation scale

This is a fast and robust algorithm to segment point clouds taken with a Velodyne sensor into objects. It works with all available Velodyne sensors, i.e. the 16-, 32-, and 64-beam ones. I recommend using a virtual environment in your catkin workspace (<catkin_ws> in this readme) and will assume that you have it set up throughout this readme. Please update your commands accordingly if needed. I will be using pipenv, which you can install with pip.
fast real-time clustering point-cloud range ros lidar depth segmentation pcl catkin velodyne-sensor velodyne depth-image range-image depth-clustering

Depth Lab is a set of ARCore Depth API samples that provides assets using depth for advanced geometry-aware features in AR interaction and rendering. Some of these features have been used in this Depth API overview video. The ARCore Depth API is enabled on a subset of ARCore-certified Android devices. iOS devices (iPhone, iPad) are not supported. Find the list of devices with Depth API support (marked with Supports Depth API) here: https://developers.google.com/ar/devices. See the ARCore developer documentation for more information.
mobile ar depth interaction arcore arcore-unity depth-api depthlab

A demo app that shows some of the OpenNI capabilities with the Kinect hardware. This app was developed so that anyone can use it as a base framework for OpenNI Kinect development. Eight projects, one VS2010 solution. Each production node has its own independent source project, an...
bruno-pires depth hack skeleton

A command-line tool to extract high-res spherical images and other data from Google StreetView. This tool uses Electron for proper interaction with the Google API, so it is fairly heavy (~100 MB). Install with the latest npm.
depth street view extract glsl lat long latitude longitude streetview google

It is impossible to parse a 1 GB JSON file with native Node.js primitives. This is just a streaming parser, nothing really fancy. The only difference between this parser and the others is that it can skip data that is nested too deeply.
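The depth-skipping idea can be illustrated with a minimal sketch. This is a hypothetical, non-streaming pruner (the project itself skips subtrees while streaming, without loading the whole document); `prune_deep` and its behaviour are assumptions made for illustration only:

```python
import json

def prune_deep(value, max_depth):
    """Drop any structure nested deeper than max_depth levels.

    Hypothetical illustration of depth-skipping; the real parser
    discards too-deep subtrees while streaming instead.
    """
    if max_depth <= 0 and isinstance(value, (dict, list)):
        return None  # too deep: skip this subtree entirely
    if isinstance(value, dict):
        return {k: prune_deep(v, max_depth - 1) for k, v in value.items()}
    if isinstance(value, list):
        return [prune_deep(v, max_depth - 1) for v in value]
    return value  # scalars pass through untouched

doc = json.loads('{"a": {"b": {"c": 1}}, "d": 2}')
print(prune_deep(doc, 2))  # {'a': {'b': None}, 'd': 2}
```

The payoff of doing this during streaming rather than after parsing is that the skipped subtrees never have to fit in memory.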
json depth stream auxilliary data

A real-time JavaScript maze generator using the depth-first search algorithm.
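The depth-first (recursive backtracker) approach can be sketched as follows. This is a generic illustration of the algorithm, not code from the linked project; the grid representation and `generate_maze` name are assumptions:

```python
import random

def generate_maze(width, height, seed=None):
    """Carve a maze with randomized depth-first search.

    Returns a dict mapping each cell (x, y) to the set of neighbouring
    cells it is connected to. Illustrative sketch of the algorithm.
    """
    rng = random.Random(seed)
    passages = {(x, y): set() for x in range(width) for y in range(height)}
    stack = [(0, 0)]
    visited = {(0, 0)}
    while stack:
        x, y = stack[-1]
        neighbours = [(x + dx, y + dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if (x + dx, y + dy) in passages
                      and (x + dx, y + dy) not in visited]
        if neighbours:
            nxt = rng.choice(neighbours)  # pick an unvisited neighbour at random
            passages[(x, y)].add(nxt)     # knock down the wall in both directions
            passages[nxt].add((x, y))
            visited.add(nxt)
            stack.append(nxt)             # go deeper: the "depth-first" part
        else:
            stack.pop()                   # dead end: backtrack
    return passages

maze = generate_maze(4, 4, seed=1)
# each new cell adds one bidirectional passage, so the maze is a
# spanning tree: (n - 1) passages, degree sum 2 * (n - 1)
assert sum(len(v) for v in maze.values()) == 2 * (4 * 4 - 1)
```

Because every cell is visited exactly once and connected when first reached, the result is always a perfect maze: exactly one path between any two cells.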
maze generator maze-generator algorithm depth-first-search stack canvas html5 generation depth-first depth first random

ReTouch is an OpenGL application that enables editing and retouching of images using depth-maps in 2.5D. The depth maps are generated by Volume, a state-of-the-art tool that uses a CNN (convolutional neural network) to predict depth-maps from 2D images. ReTouch uses these depth-maps to enable the addition of depth of field and color retouching for the foreground and background separately.
opengl machine-learning depth depth-map editor graphics image-processing

The paper-shadows mixin library for CSS pre-processors. Depth shadows.
google material design material-design google-material-design shadows shadow depth less sass stylus

goleft is a collection of bioinformatics tools written in Go, distributed together as a single binary under a liberal (MIT) license. Running the binary goleft will give a list of subcommands with a short description. Running any subcommand without arguments will give the full help for that command.
genomics bioinformatics coverage depth

Fast BAM/CRAM depth calculation for WGS, exome, or targeted sequencing. When appropriate, the output files are bgzipped and indexed for ease of use.
coverage genome depth exome wgs sequencing nim nim-lang

This repository (https://github.com/twhui/MSG-Net) is the official release of MSG-Net for our paper Depth Map Super-Resolution by Deep Multi-Scale Guidance in ECCV16. It comes with four trained networks (x2, x4, x8, and x16), one hole-filled RGBD training set, and three hole-filled RGBD testing sets (A, B, and C). To the best of our knowledge, MSG-Net is the FIRST convolutional neural network which attempts to upsample depth images under multi-scale guidance from the corresponding high-resolution RGB images.
msg-net super-resolution depth caffe cnn eccv16

Working with super-deep JSON objects from the terminal is a pain, unless you use a good JSON parser. jq is an awesome one, but doesn't handle object depths, AFAIK. Here the idea is to walk through a JSON object as you would read a summary: level by level.
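Level-by-level reading is a breadth-first traversal. A minimal sketch of the idea (the function name, path syntax, and output format are assumptions for illustration, not this project's interface):

```python
from collections import deque

def walk_levels(obj):
    """Yield (depth, path, value) in breadth-first order, so an object
    is read level by level, like a summary. Illustrative sketch."""
    queue = deque([(0, "", obj)])
    while queue:
        depth, path, value = queue.popleft()
        yield depth, path, value
        if isinstance(value, dict):
            for key, child in value.items():
                queue.append((depth + 1, f"{path}.{key}".lstrip("."), child))
        elif isinstance(value, list):
            for i, child in enumerate(value):
                queue.append((depth + 1, f"{path}[{i}]", child))

doc = {"user": {"name": "Ada", "tags": ["a", "b"]}, "ok": True}
for depth, path, value in walk_levels(doc):
    if not isinstance(value, (dict, list)):
        print(depth, path, value)  # shallow leaves are printed first
```

Because the queue is first-in first-out, every key at depth 1 appears before anything at depth 2, which is exactly the "read it like a summary" order.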
json cli parser depth ndjson parse inspect

A modular API is provided where effects can act as both input and output for other effects. Effect shader chunks and uniforms are fused together, where possible, in uber shaders for performance. The effect fusion mechanism allows efficient setups of high complexity to be implemented effortlessly in declarative fashion. The framework is also VR-ready. Mechanisms are provided to deal with the issues stemming from the required stereo rendering setup, and all core effects utilize them to ensure proper post-processing operations in VR.
vr webvr aframe webgl shaders post-processing threejs grading postprocessing effect bloom outline sobel freichen edge colors ssao ambient depth occlusion fxaa antialiasing godrays sunshafts three uber shader fusion grain film fuse tiled forward lighting forward+

Dead-simple defense against unbounded GraphQL queries. Limit the complexity of queries solely by their depth. Suppose you have an Album type that has a list of Songs.
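The depth check itself is a simple recursion. Below is a hedged sketch that models a query's selection set as nested dicts; a real depth limiter walks the parsed GraphQL document instead, and `selection_depth`/`enforce_max_depth` are names invented for this illustration:

```python
def selection_depth(selection_set):
    """Depth of a query modeled as {field: nested_selections_or_None}.

    Simplified stand-in for walking a real GraphQL AST.
    """
    if not selection_set:
        return 0
    return 1 + max(selection_depth(child) for child in selection_set.values())

def enforce_max_depth(query, limit):
    """Reject the query before execution if it nests too deeply."""
    depth = selection_depth(query)
    if depth > limit:
        raise ValueError(f"query depth {depth} exceeds limit {limit}")
    return depth

# { album { songs { title } } }  -> depth 3
query = {"album": {"songs": {"title": None}}}
assert enforce_max_depth(query, limit=5) == 3
```

The point of checking depth before execution is that a malicious `album { songs { album { songs { ... } } } }` query is rejected in O(query size), before it can fan out against the database.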
graphql security nodejs complexity query depth limit

Skipping heading ranks can be confusing and should be avoided where possible: make sure that a <h2> is not followed directly by an <h4>, for example. So an accessible app must have heading levels like this...
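The rule is mechanical enough to check automatically. A small sketch of such a checker (this is an illustration of the rule, not the linked library's API; `check_heading_order` is a hypothetical name):

```python
import re

def check_heading_order(headings):
    """Flag skipped heading ranks, e.g. an <h4> directly after an <h2>.

    `headings` is a list of tag names like ["h1", "h2", "h4"],
    in document order. Illustrative sketch of the accessibility rule.
    """
    problems = []
    previous = 0
    for tag in headings:
        level = int(re.fullmatch(r"h([1-6])", tag).group(1))
        if level > previous + 1:  # going deeper by more than one rank
            problems.append(f"<{tag}> follows <h{previous}>: skipped h{previous + 1}")
        previous = level
    return problems

print(check_heading_order(["h1", "h2", "h4"]))  # reports the skipped h3
print(check_heading_order(["h1", "h2", "h3"]))  # no problems
```

Note that jumping back up (an `<h2>` after an `<h4>`) is fine; only downward skips break the outline that screen readers present.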
react accessibility a11y wcag aria aria-level heading level depth h1 h2