Yolo

Raspberry Pi

 * https://github.com/shizukachan/darknet-nnpack  1 fps
 * https://github.com/DT42/BerryNet 1 fps Yolo on the Raspberry Pi; with a Movidius Neural Compute Stick the frame rate is 30.

People tracking
Moved to Deep SORT.

tutorials
Pyimagesearch.com

youtube
https://www.youtube.com/watch?v=Y73SWT79Rck for the PyTorch install; also: conda install tensorflow-gpu

https://www.youtube.com/watch?v=YmMZkCstui0&list=PL_Nji0JOuXg1gNDFvJ8xU3dAFw27Oqz3J playlist by Augmentedreality uploader

Movidius compute stick

AlexeyAB (preferred fork)
https://groups.google.com/forum/#!msg/darknet/8qC4k_cWgOc/TDxjY34ZBQAJ To save detection results for a set of images into a txt file, compile with LIBSO=1 and run: ./uselib air.txt > result.txt, where air.txt contains the paths to the images; result.txt will then contain the detection coordinates. To save detection results from a video file, compile with LIBSO=1 and run: ./uselib test.mp4 > result.txt. Before this, uncomment this line and recompile (it slightly reduces FPS): https://github.com/AlexeyAB/darknet/blob/548a0bc652b562723695cc107f0844f11d1a2207/src/yolo_console_dll.cpp#L169 Also read: https://github.com/AlexeyAB/darknet/issues/125#issuecomment-320373088

On Friday, 29 September 2017 at 12:05:26 UTC+3, alex.ange...@gmail.com wrote: https://github.com/pjreddie/darknet/issues/723 Run YoloV3 detections on thousands of images and save the outputs?

./darknet detector test ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights -dont_show < data/train.txt > result.txt
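As a post-processing sketch, the per-line percentages in result.txt can be parsed back into (label, confidence) pairs. The line format assumed below ("label: NN%") is based on darknet's usual console output; adjust the regex to whatever your build actually prints:

```python
import re

def parse_darknet_output(text):
    """Parse darknet console output into (label, confidence) pairs.
    Assumes one detection per line in the form 'label: NN%'; image-path
    and timing lines are skipped because they don't match the regex."""
    detections = []
    for line in text.splitlines():
        m = re.match(r"^\s*([\w ]+):\s*(\d+)%\s*$", line)
        if m:
            detections.append((m.group(1), int(m.group(2)) / 100.0))
    return detections

# Hypothetical result.txt contents:
sample = """data/dog.jpg: Predicted in 22.5 ms.
dog: 99%
truck: 92%
bicycle: 99%"""
print(parse_darknet_output(sample))  # → [('dog', 0.99), ('truck', 0.92), ('bicycle', 0.99)]
```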

https://github.com/AlexeyAB/darknet#how-to-train-pascal-voc-data Fork of Yolo; download an Android webcam app to use an Android phone as a network-camera input stream.
 * https://github.com/AlexeyAB/Yolo_mark GUI for marking bounding boxes of objects in images for training Yolo v2
 * Multi gpu training instructions.
 * https://github.com/pjreddie/darknet/pull/861 read directly from memory.
 * https://github.com/AlexeyAB/darknet/issues/407 Cuda patch.
 * https://timebutt.github.io/static/how-to-train-yolov2-to-detect-custom-objects/ training a custom data set, from Nils Tijtgat; see also https://pjreddie.com/darknet/yolo/. YOLOv2 is known to struggle when detecting small objects. The darknet Google Group has many topics on how to improve performance; a suggestion that is often repeated is to train YOLOv2 at a higher input resolution than 416x416. See for instance the "yolo small" Google Groups threads: |sort:relevance/darknet/MumMJ2D8H9Y/n6nAIM0EAgAJ
 * https://timebutt.github.io/static/understanding-yolov2-training-output/
 * |sort:relevance/darknet/BcsBQ-rez9Q/_BlMlIUSAQAJ the Sept 2017 version runs at 15 fps on a TX1; use AlexeyAB's fork, which gives 200 fps. pjreddie also reorganized his code, substantially changing the folder layout.
 * counting the number of objects in an image, people tracking: https://www.youtube.com/watch?v=QeWl0h3kQ24
 * fix for 4K video (FPS from 7 to 20)
 * run the demo without a screen, e.g. when using Amazon GPUs.
 * resize_image
 * small object detection
 * https://groups.google.com/forum/#!topic/darknet/EjQGffa7y-k multi-GPU training. See Nvidia cuda blade install for the CUDA install on Linux.
 * Yolo bounding box

Notable forks
https://github.com/explosion/lightnet from https://prodi.gy/

https://github.com/dannyblueliu/YOLO-Face-detection and download https://www.dropbox.com/s/bih69gvt7g0soxo/yolo-face_final.weights?dl=0

https://github.com/xhuvom/darknetFaceID

https://github.com/bendidi/Tracking-with-darkflow

https://github.com/oarriaga/face_classification Gender and face detection.

https://github.com/RiccardoGrin/darknet Though there are many image datasets/databases online, I could not find the images I wanted, or they were part of a very large set, or the download was simply too large. Therefore, I just used my phone to take photos. However, the smallest photos I could take were 3264x1836, and their names were not as desired. From research, apparently at least 250 different images are needed for each class. Taking 250 photos can take some time and creativity, so I took only half and did some image augmentation (flipping, rotating, etc.) to get all 250 images. NOTE: much better results will be achieved by getting 250 or more images without applying any augmentation, as there will be more difference between the images. Image augmentation should only really be used to enlarge the set and further improve classification accuracy, though the increase will not be as large as with original images.
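The flipping part of that augmentation can be sketched in a few lines. A real pipeline would use PIL or OpenCV; this only illustrates the idea on an image represented as a nested list of pixel values:

```python
def hflip(image):
    """Horizontally flip an image given as a list of rows (each row a
    list of pixel values) by reversing every row. Toy stand-in for
    PIL's Image.transpose or cv2.flip."""
    return [row[::-1] for row in image]

img = [[1, 2, 3],
       [4, 5, 6]]
print(hflip(img))  # → [[3, 2, 1], [6, 5, 4]]
```

Remember that flipped copies still need their YOLO label files mirrored (x_center becomes 1 - x_center).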

training
./darknet detector train Data/voc.data yolo.cfg darknet19_448.conv.23   (training command from the darknet Google Group)

I'm assuming you've successfully created a train.txt file (the file listing all the file paths to your dataset; its creation is detailed on the YOLO homepage). If you've got that created, it's probably not in your /data/voc/ directory; it's most likely one level up from where your images and labels are stored. In yolo.c you need to specify where that file is located (an absolute path works here): go to where train.txt is, run pwd (print working directory), copy that absolute path into yolo.c on line 18 (replacing what is there), then run "make clean" and "make" in your darknet directory. (training advice from Paul McElroy) https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects
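Generating train.txt itself can be scripted. A minimal sketch, assuming a flat directory of .jpg images (the directory layout is an assumption; adapt to yours):

```python
import os

def write_train_list(image_dir, out_path="train.txt"):
    """Write the absolute path of every .jpg in image_dir to out_path,
    one per line -- the format darknet expects in train.txt."""
    abs_dir = os.path.abspath(image_dir)
    with open(out_path, "w") as f:
        for name in sorted(os.listdir(image_dir)):
            if name.lower().endswith(".jpg"):
                f.write(os.path.join(abs_dir, name) + "\n")
```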

Can I reduce the number of convolution layers and fully connected layers in the yolo.cfg file? As long as the downsampling factor stays 32, you can do anything you want. The network takes a 416x416 image and downsamples it to 13x13, so the downsampling factor is 32 (416/13). Changing the number of convolution filters does not affect the downsampling factor, because downsampling is tied to the spatial size while the number of conv filters works on the depth of the tensor. However, if you remove one of the conv layers, the downsampling factor will change from 32. If you have a single class, I would recommend decreasing the number of second-to-last (https://github.com/Jumabek/darknet/blob/master/cfg/yolo-voc.cfg#L217) and third-to-last (https://github.com/Jumabek/darknet/blob/master/cfg/yolo-voc.cfg#L200) convolutional filters from 1024,1024 to 256,512. Also, make sure you use anchors suited to images of people. These scripts might be helpful for computing anchors: https://github.com/Jumabek/darknet_scripts
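The downsampling arithmetic is easy to check:

```python
def grid_size(input_size, downsample=32):
    """YOLOv2's output grid side: input resolution divided by the
    network's downsampling factor (32 in the stock config)."""
    assert input_size % downsample == 0, "input must be a multiple of 32"
    return input_size // downsample

print(grid_size(416))  # → 13
print(grid_size(608))  # → 19
```

This is why higher input resolutions (608x608 gives a 19x19 grid) help with small objects: more grid cells means finer localization.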

Calculating anchors: run k-means clustering on the training-data widths and heights. The anchors are used similarly to anchor boxes; YOLOv2 predicts offsets to these widths and heights (however, it predicts the x/y coordinates in the same way as YOLOv1). Note that the anchors are generated by the k-means algorithm, where the author clustered all the VOC box sizes and ratios into 5 groups, so 16,10 is one of those 5 clusters. I will probably make a tutorial about anchors this weekend, stay tuned. ([[Jumabek Alikhanov]])
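A minimal sketch of that anchor computation. Plain k-means with Euclidean distance is used here for brevity (the YOLOv2 paper actually clusters with an IoU-based distance), and the box list is made up for illustration:

```python
def kmeans_anchors(boxes, k=5, iters=20):
    """Cluster (width, height) pairs into k anchor shapes with plain
    k-means. Initialisation simply takes the first k boxes as
    centroids; a real script would use random restarts and an
    IoU-based distance as in the YOLOv2 paper."""
    centroids = [list(b) for b in boxes[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in boxes:
            i = min(range(k),
                    key=lambda c: (w - centroids[c][0]) ** 2
                                  + (h - centroids[c][1]) ** 2)
            clusters[i].append((w, h))
        for i, cl in enumerate(clusters):
            if cl:  # keep old centroid if a cluster went empty
                centroids[i] = [sum(v) / len(cl) for v in zip(*cl)]
    return centroids

# Made-up (width, height) pairs standing in for training labels:
boxes = [(10, 14), (12, 16), (33, 30), (30, 33), (62, 45), (59, 50),
         (80, 119), (116, 90), (156, 198), (373, 326)]
print(kmeans_anchors(boxes, k=3))
```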

Converting ReInspect annotations into YOLO annotations for detection

make file

 * GPU settings: arch in the Makefile
 * http://www.pradeepadiga.me/blog/2017/03/22/installing-cuda-toolkit-8-0-on-ubuntu-16-04/ installing cuda

multiple gpu
https://groups.google.com/forum/#!topic/darknet/NbJqonJBTSY train on four GPUs at the same time.

node js
https://github.com/moovel/node-yolo, https://lab.moovel.com/blog/what-you-get-is-what-you-see-nodejs-yolo Teaching your computer how to see just got easier with node-yolo. Created as a collaboration between the moovel lab and Alex (@OrKoN of moovel engineering), node-yolo builds upon Joseph Redmon's neural network framework and wraps the You Only Look Once (YOLO) real-time object detection library into a convenient, web-ready node.js module. The best thing about it: it's open source!

yolo swift
http://machinethink.net/blog/object-detection-with-yolo/

bounding box
Yolo bounding box

Python wrapper

 * https://github.com/thomaspark-pkj/pyyolo outputs bounding boxes to a text file. Use OpenCV 2.4 rather than 3.3.0, due to a waitKey issue.
 * https://github.com/lucaswamser/darknet, https://github.com/pjreddie/darknet/pull/111

tensorflow port
https://github.com/thtrieu/darkflow Download the weights from the Google Drive link there or use pjreddie's weights.

https://dzone.com/articles/implement-object-recognition-on-live-stream

pjreddie author

 * https://github.com/pjreddie/TopDeepLearning Various projects on deep learning neural nets.
 * https://groups.google.com/forum/#!forum/darknet  forum

Jumabek
https://github.com/Jumabek/darknet_scripts, anchors in the region layer (Google Groups)
 * how to use yolo weights

darknetfanz
train yolo on coco data: The first time I made a custom dataset and ran the 'demo' argument, I changed yolo.c line 13 ("char *voc_names[]=...") to reflect my custom classes. The second time I made a custom dataset, I added an argument to darknet.c, "-override_vocnames", that loaded the appropriate "names=" file from the data file (i.e. coco.data).
 * Maybe not the best way to do it, but it was easy to implement.

thtrieu
https://github.com/thtrieu/darkflow JSON output can be generated with the pixel location and label of each bounding box. Each prediction is stored in the sample_img/out folder by default.
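An example of the kind of JSON array it emits, and how to read it back. The field names (label, confidence, topleft, bottomright) follow the darkflow README at the time of writing; verify against your own out folder:

```python
import json

# Hedged sample of darkflow's per-image JSON output:
sample = '''[
  {"label": "person", "confidence": 0.56,
   "topleft": {"x": 184, "y": 101},
   "bottomright": {"x": 274, "y": 382}}
]'''

for det in json.loads(sample):
    # Convert corner coordinates to a width/height for convenience.
    w = det["bottomright"]["x"] - det["topleft"]["x"]
    h = det["bottomright"]["y"] - det["topleft"]["y"]
    print(det["label"], det["confidence"], w, h)  # → person 0.56 90 281
```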

Sai

 * https://github.com/saiprabhakar/darknet-modified/tree/v0 Outputs image labels and bounding boxes to a text file. When a person walking down the street veers onto the driveway, his position change triggers an alert. https://groups.google.com/forum/#!topic/darknet/ylEWe3JUKrE
 * https://github.com/saiprabhakar/Scene-recognition subscene analysis
 * https://github.com/saiprabhakar/DeepDriving Deep driving. See Jabelone

Guanghan

 * https://github.com/Guanghan/darknet This fork adds some features on top of the current darknet from pjreddie, e.g. (1) read a video file, process it, and output a video with bounding boxes.
 * http://guanghan.info/blog/en/my-works/train-yolo/ and his SSD detector
 * https://groups.google.com/forum/#!topic/darknet/cxTAbP-um7Y  ,
 * https://github.com/puzzledqs/BBox-Label-Tool ,
 * https://github.com/Guanghan/darknet/blob/master/scripts/convert.py
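The VOC-to-YOLO box conversion performed by scripts like convert.py follows the standard formula: center coordinates and box sizes normalized by the image dimensions. (Some versions of darknet's voc_label.py also subtract 1 from the pixel coordinates first; this sketch omits that.)

```python
def voc_to_yolo(size, box):
    """Convert a VOC box (xmin, xmax, ymin, ymax) in pixels to YOLO's
    normalized (x_center, y_center, width, height), given the image
    size as (width, height)."""
    img_w, img_h = size
    xmin, xmax, ymin, ymax = box
    x = (xmin + xmax) / 2.0 / img_w
    y = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / float(img_w)
    h = (ymax - ymin) / float(img_h)
    return x, y, w, h

print(voc_to_yolo((640, 480), (100, 300, 120, 360)))  # → (0.3125, 0.5, 0.3125, 0.5)
```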

I am wondering about the answer to the original question: can we get the coordinates and count of detected objects as text output in darknet?

Yes you can: in src/image.c, find the draw_detections function. left, right, top, and bot are the image bounding-box coordinates and names[class] is the object name; you can save the bounding box and object name to a txt file and count the objects.
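Once the names[class] values are collected (e.g. dumped to a txt file per the answer above), counting per class is a one-liner. A sketch with a hypothetical list of detected labels:

```python
from collections import Counter

def count_objects(labels):
    """Tally detections per class name -- the kind of count you could
    produce from the labels saved out of draw_detections."""
    return Counter(labels)

# Hypothetical labels collected from one frame:
detected = ["person", "car", "person", "dog", "person"]
print(count_objects(detected))  # → Counter({'person': 3, 'car': 1, 'dog': 1})
```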

http://guanghan.info/projects/ROLO/ ROLO, a fork of Yolo, does real-time tracking and identification of human body parts such as the face, allowing the tracked-vehicle robot's PepperBall gun to engage accurately. https://github.com/Guanghan/ROLO

Guozhongluo
https://github.com/guozhongluo/YOLO Only needs OpenCV, not Berkeley Vision's Caffe.

Yolo python wrapper
https://github.com/IvonaTau/Python-wrapper-for-YOLO, https://groups.google.com/forum/#!topic/darknet/f-TICXNR1_E

https://github.com/thomaspark-pkj/pyyolo from python wrapper

https://pjreddie.com/darknet/

https://pjreddie.com/darknet/yolo/

Sakmann
https://medium.com/@ksakmann/vehicle-detection-and-tracking-using-hog-features-svm-vs-yolo-73e1ccb35866 from Sakmann
 * http://www.pyimagesearch.com/2014/11/10/histogram-oriented-gradients-object-detection/

face tracking
https://www.youtube.com/watch?v=UsOi1BfunnU https://github.com/xhuvom/darknetFaceID [i] To detect faces from a live camera feed and annotate them automatically, use the .cfg and .weights files from QuanHua (https://mega.nz/#F!GRV1XKbJ!v8BCsFO8iJVNppiGXY4qMw). [ii] Add the lines described below to the src/image.c file of this fork:

(line #223) to save .jpg images and (line #227) to save annotations in separate folders for each class (also change the class number on line #229).

[iii] After the modifications, run the detector on a live webcam or a video file that shows only one particular person's face. [iv] Repeat the process for every person you want to recognize, modifying the training-data location and class number accordingly. About ~2k face images per person is enough to recognize individual faces, but more data could be added to improve accuracy.

traffic
https://github.com/karolmajek/darknet, https://www.youtube.com/watch?v=yQwfDxBMtXg
 * https://www.youtube.com/watch?v=DeCFxPQlOVk Indian traffic data, https://github.com/ctmackay/darknet  Track 1 used the Darknet framework with Yolo object detection; we achieved 2nd place in mean average precision for the AI City Challenge using this network and training parameters. You will need to build darknet in order to train and run inference on the models. I need to contact an Nvidia representative; they own the rights to the dataset, and I may not have permission to release the models. I am meeting with them on the 6th and will get back to you.

c++ wrapper
https://groups.google.com/forum/#!topic/darknet/oxAi9DjxTcM Check src/yolo.c for the various input args and how each of them is handled. You could extend the test_yolo function to run detection on multiple images: void test_yolo(char *cfgfile, char *weightfile, char *filename, float thresh)
 * https://github.com/for-aiur/yolo_cpp

opencl
https://github.com/myestro/darknet

links
Uses a Titan X GPU ($600) with Yolo to identify objects, draw bounding boxes, and pass the coordinates to, say, thirty separate tracked-vehicle bots with cost-effective CPUs running OpenTLD. The ideal solution is to implement Yolo on an FPGA.
 * https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/single-shot-detectors/yolo.html tutorial
 * Yolo scripts, Yolo training
 * Yolo Steve Puttemans
 * Yolo Mark Jay
 * Yolo compile, Yolo alexeyAB
 * Yolo opencl works on the Raspberry Pi as well.
 * two minute papers
 * Stolen cars RSA
 * SORT tracking
 * AI datasets
 * Yolo voletiv