TensorFlow

Caffe to TensorFlow
Convert Caffe (Berkeley Vision) models to TensorFlow, which allows easier Python-based neural network development. https://github.com/ethereon/caffe-tensorflow, http://www.cs.toronto.edu/~guerzhoy/tf_alexnet/ (linked from David Silver, http://medium.com).

Three front ends are available for TensorFlow:
 * https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim tensorflow slim
 * https://keras.io/
 * http://tflearn.org/ David Silver's preferred solution

resources
https://github.com/jtoy/awesome-tensorflow A curated list of TensorFlow experiments, libraries, and projects.

YOLO

 * https://github.com/pjreddie/TopDeepLearning Various projects on deep learning neural nets.
 * http://guanghan.info/projects/ROLO/ ROLO, a fork of YOLO, does real-time tracking and identification of human body parts such as the face, allowing the tracked-vehicle robot's PepperBall gun to engage targets accurately.
 * https://github.com/thtrieu/darkflow

https://pjreddie.com/darknet/

https://pjreddie.com/darknet/yolo/

https://medium.com/@ksakmann/vehicle-detection-and-tracking-using-hog-features-svm-vs-yolo-73e1ccb35866

https://github.com/xslittlegrass/CarND-Vehicle-Detection Detecting vehicles in a video stream is an object detection problem, which can be approached as either a classification problem or a regression problem. In the classification approach, the image is divided into small patches, each of which is run through a classifier to determine whether it contains an object; bounding boxes are then assigned around patches classified with a high probability of containing an object. In the regression approach, the whole image is run through a convolutional neural network that directly generates one or more bounding boxes for the objects in the image.
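The classification-style pipeline described above can be sketched in a few lines: slide a fixed-size window over the image, score each patch, and keep high-scoring patches as box candidates. This is a minimal illustration, not code from the linked repo; the names (`sliding_window_boxes`, `score_patch`) and the toy brightness "classifier" are assumptions for the sketch.

```python
import numpy as np

def sliding_window_boxes(image, patch=64, stride=32, threshold=0.5,
                         score_patch=None):
    """Return (x, y, w, h) boxes whose patch score exceeds threshold."""
    h, w = image.shape[:2]
    boxes = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            # classify each patch independently (the "classification approach")
            if score_patch(image[y:y + patch, x:x + patch]) > threshold:
                boxes.append((x, y, patch, patch))
    return boxes

# Toy stand-in classifier: "detect" bright patches.
bright = lambda p: p.mean() / 255.0
img = np.zeros((128, 128), dtype=np.uint8)
img[0:64, 0:64] = 255                      # one bright quadrant
print(sliding_window_boxes(img, score_patch=bright))  # → [(0, 0, 64, 64)]
```

The regression approach replaces this inner loop entirely: one forward pass of a CNN emits the box coordinates directly, which is why YOLO-style detectors are so much faster.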

https://github.com/allanzelener/YAD2K You only look once, but you reimplement neural nets over and over again. YAD2K is a 90% Keras / 10% TensorFlow implementation of YOLO_v2. Original paper: YOLO9000: Better, Faster, Stronger by Joseph Redmon and Ali Farhadi. https://arxiv.org/abs/1612.08242

 * https://github.com/sunshineatnoon/Darknet.keras/
 * http://www.robots.ox.ac.uk/~joao/
 * https://github.com/vojirt/kcf Kernelized correlation filters. http://www.robots.ox.ac.uk/~joao/circulant/ Both KCF and DCF outperform top-ranking trackers such as Struck or TLD on a 50-video benchmark, despite running at hundreds of frames per second and being implemented in a few lines of code. To encourage further development, the tracking framework was made open source.
 * http://rodrigob.github.io/#code is the Python port; C++ is also available.
 * https://github.com/rodrigob/barinova_pedestrians_detection A Linux port of the original code provided by Olga Barinova from the Vision Group at Moscow State University, 2010; visit the project website for more details. This derivative work follows the Microsoft Research Shared Source license, which allows only non-commercial usage (meaning commercial companies will have to pay millions to use it in a commercial product). The FSF's framing of the GPL (versus BSD) doesn't make it clear that copyright holders can arbitrarily waive such a non-commercial restriction if you pay them enough. Pedestrian detection using Hough forests is a derivative work of http://graphics.cs.msu.ru/en/science/research/machinelearning/hough. To detect multiple objects of interest, methods based on the Hough transform use non-maximum suppression or mode seeking to locate and distinguish peaks in Hough images. Such post-processing requires tuning extra parameters and is often fragile, especially when objects of interest are closely located. The paper develops a new probabilistic framework that is in many ways related to the Hough transform, sharing its simplicity and wide applicability, while bypassing the problem of identifying multiple peaks in Hough images and permitting detection of multiple objects without invoking non-maximum-suppression heuristics. As a result, the experiments demonstrate a significant improvement in detection accuracy both for the classical task of straight-line detection and for a more modern category-level (pedestrian) detection problem.
 * https://github.com/rodrigob/circulant_matrix_tracker Circulant matrix tracker
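The circulant-matrix trick behind KCF/DCF (and why they run at hundreds of FPS in a few lines of code) is that correlating a template against every cyclic shift of a signal costs O(n log n) via the FFT instead of O(n²). A minimal 1-D sketch of that core idea, not the full kernelized tracker:

```python
import numpy as np

def circulant_correlation(signal, template):
    """Correlation response of template against all cyclic shifts of signal,
    computed in the Fourier domain (convolution theorem)."""
    return np.real(np.fft.ifft(np.fft.fft(signal) *
                               np.conj(np.fft.fft(template))))

x = np.array([0., 0., 1., 0., 0., 0.])   # template: an impulse at index 2
shifted = np.roll(x, 2)                   # "next frame": target moved by 2
resp = circulant_correlation(shifted, x)
print(int(np.argmax(resp)))               # → 2, the peak recovers the shift
```

A real tracker does the same thing in 2-D with a learned filter (and, in KCF, a kernelized feature space), reading the target's displacement off the location of the response peak each frame.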

SSD
https://www.youtube.com/watch?v=6q-DBCPROA8 Single Shot MultiBox Detector. Provides comparable accuracy to explicit region-proposal methods (such as Faster R-CNN) but is much faster and thus better suited for real-time applications.
 * https://github.com/weiliu89/caffe/tree/ssd

paper
http://cs229.stanford.edu/proj2016/report/BuhlerLambertVilim-CS229FinalProjectReport.pdf We reimplement YOLO, a fast, accurate object detector, in TensorFlow. To perform inference, we leverage weights that were trained for over one week on GPUs using ImageNet data, a publicly-available dataset containing several million natural images. We demonstrate the ability to reproduce detections comparable with the original implementation. We learn the parameters of the network and compare mean average precision computed from pre-trained network parameters. Furthermore, we propose a post-processing scheme to perform real-time object tracking in live video.
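Post-processing for detectors like YOLO typically includes greedy non-maximum suppression (NMS) to collapse overlapping boxes onto one detection per object; this sketch shows the standard algorithm, not necessarily the specific scheme the report proposes:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it
    by more than thresh, repeat. Returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order):
        i = order[0]
        keep.append(int(i))
        mask = np.array([iou(boxes[i], boxes[j]) < thresh
                         for j in order[1:]], dtype=bool)
        order = order[1:][mask]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]])
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # box 1 overlaps box 0 and is suppressed → [0, 2]
```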

Redmon et al.'s work is especially notable for two major strengths. First, their model solves in an end-to-end fashion what was considered in the not-far-distant past two separate problems in computer vision literature: object detection and object classification. Second, their model presents an efficient solution to an enduring problem in computer vision: how does one go about producing an arbitrary number of detections in an image while using fixed-dimensional input, output, and labels? YOLO avoids computationally expensive region proposal steps that detectors like Fast R-CNN [4] and Faster R-CNN [14] require. However, since the time of YOLO's publication, newer models such as Single-Shot Multi-Box Detectors [9] seem to offer improvement in mAP with reduced GPU inference time [6]. YOLO uses grid cells as anchors to detections, much like Faster R-CNN and Multi-Box.
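The grid-cell anchoring described above is how YOLO squeezes a variable number of objects into fixed-dimensional output: every cell of an S×S grid predicts a box offset within that cell, so the network always emits S² candidates. A hedged sketch of the decoding step, using YOLOv1-style normalization (conventions vary between versions; the names here are illustrative):

```python
import numpy as np

def decode_grid(pred, S, img_w, img_h):
    """pred: (S, S, 4) of (x_off, y_off, w, h), all in [0, 1].
    Returns (S*S, 4) boxes as (cx, cy, w, h) in pixels."""
    boxes = []
    for row in range(S):
        for col in range(S):
            x_off, y_off, w, h = pred[row, col]
            cx = (col + x_off) / S * img_w   # offset within the cell -> image x
            cy = (row + y_off) / S * img_h
            boxes.append((cx, cy, w * img_w, h * img_h))
    return np.array(boxes)

pred = np.zeros((2, 2, 4))
pred[1, 1] = [0.5, 0.5, 0.25, 0.25]          # centered in the bottom-right cell
print(decode_grid(pred, 2, 100, 100)[3])     # → [75. 75. 25. 25.]
```

In the real network each cell also carries objectness and class scores, and NMS then prunes the fixed S² candidates down to the actual detections.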

links
http://rimstar.org/science_electronics_projects/backpropagation_neural_network_software_3_layer.htm