# 2017

## My favourite

• Opencv tutorial people detection
• Opencv tutorial optical flow
• Opencv Video stabilization
• Opencv videowriter
• Opencv FFMPEG
• Opencv Canny edge and hough lines
## VideoCapture IP camera stream, web camera, file, images and VideoWriter

Reading video files, video streams, images, and IP or web cameras in OpenCV. I would like to cover all of this in one post. Yes, the video writer is also important, to store your results and achievements as video. There are a couple of simple tricks, and if you follow them you will never have a problem reading and writing video streams and files in the future.

### Basic opencv web camera reading

There are a couple of things you need to take care of. My favorite installation on the Windows platform is through the NuGet package system. It takes just a few easy steps; I have described it many times, for example for VS 2017 here. NuGet sets up your project without any linker settings, library paths, or global environment variables, and you can start coding directly in a few seconds. Just select and install the NuGet package and compile the code below. Nothing else. You do need to take care to include several headers: highgui.hpp, core.hpp, imgproc.hpp, videoio.hpp, imgcodecs.hpp. Not all of them are necessary to read a web camera, but for a video stream from an IP camera you may really need them all.

#### VideoCapture web camera code

VideoCapture cap(0); opens the default camera. Most of the time this means the web camera on your laptop or any plugged-in USB camera. The video is read in a never-ending for(;;) loop, which breaks when the camera is not available, checked by the condition if (!cap.isOpened()). Finally, Mat img; cap >> img; copies an image from the default camera device into your Mat container. The rest is just display.

#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/videoio/videoio.hpp"
#include "opencv2/imgcodecs/imgcodecs.hpp"
#include "opencv2/core/core.hpp"
#include <vector>
#include <stdio.h>
#include <iostream>
#include <time.h>

using namespace cv;
using namespace std;

int main(int argc, const char** argv)
{
    VideoCapture cap(0);
    for (;;)
    {
        if (!cap.isOpened()) {
            cout << "Video Capture Fail" << endl;
            break;
        }
        else {
            Mat img;
            cap >> img;
            namedWindow("Video", WINDOW_AUTOSIZE);
            imshow("Video", img);
            // Press ESC to end the loop
            int key2 = waitKey(20);
            if (key2 == 27) break;
        }
    }
    return 0;
}



### Opencv read video from file

To read a video file, look at the camera example above; there is almost no difference. Just one small and straightforward change: as the parameter of cap, instead of the default device cap(0), pass the file name or path you want to open. There is almost always trouble with the path. If you pass just a file name, you read files located under your project directory. You can also read a file from a different location by using the full path to some video folder, as you can see in the following examples.

### Opencv Image read from file and writing

This is a super easy task. Into our Mat container image we load the image 6.jpg from the C:/adress/ path. There is also something else that is great to have in case you are reading lots of images inside a folder.

Mat image = imread("C:/adress/6.jpg", CV_LOAD_IMAGE_COLOR);

CV_LOAD_IMAGE_COLOR is a defined parameter that tells the reader I want a Mat with 3 color channels: basically 3 Mat arrays of the image size, one each for the blue, green, and red channel. CV_LOAD_IMAGE_GRAYSCALE tells the reader I want grayscale: basically only one Mat with the width (cols) and height (rows) of the image.

To write results into a file, just use imwrite, where the first string is the name of your result image and image is the Mat containing what you want to save.

imwrite("image.jpg", image);

### Opencv video stream verification

I follow one good practice: instead of trying the stream directly in OpenCV, I prefer to verify my stream in the VLC player first. It is faster than modifying the code and compiling again just to pass a camera URL as a parameter. Also, VLC asks for a user name and password if necessary. What is annoying is that every camera has its own stream URL format. The best approach is to find your IP camera model on http://www.ispyconnect.com and verify the URL inside VLC. After verification, put it directly into VideoCapture cap("http://IP:PORT/mjpeg/video.mjpg?counter");

http://IP:PORT/mjpeg/video.mjpg?counter
rtsp://IP:PORT/various url
rtsp://IP:PORT/axis-cgi/mjpg/video.cgi
http://IP:PORT/mjpg/video.mjpg
("rtsp://USER:PASS@xxx.xxx.xxx.xxx/axis-media/media.amp?camera=2")
Important: FFMPEG is needed on Linux. With NuGet packages it depends, but the stream sometimes needs a special installation.

### Opencv tutorial code IP camera pseudo code

There are 3 functions.
First of all, the main function at the end, where 2 threads are established to read the camera streams.

In main
• A thread calls the stream function for each camera, with a different IP camera URL: thread cam1(stream, "http://xxxxxxxR");
• cam1.join(); waits for the thread running the stream function with the URL as its parameter.

void stream
• Capture video from the url strCamera: VideoCapture cap(strCamera)
• Fill the frame from cap: cap >> frame;
• Detect people in the frame: detect(frame, strCamera);

void detect

### Opencv C++ IP camera code


#include <iostream>
#include <thread>
#include <vector>
#include "opencv2/opencv.hpp"
using namespace std;
using namespace cv;

// Global body detector, loaded once in main before the threads start
CascadeClassifier detectorBody;

void detect(Mat img, String strCamera) {
    Mat original;
    img.copyTo(original);
    vector<Rect> human;
    cvtColor(img, img, CV_BGR2GRAY);
    equalizeHist(img, img);
    detectorBody.detectMultiScale(img, human, 1.1, 2, 0 | 1, Size(40, 80), Size(400, 480));
    for (size_t gg = 0; gg < human.size(); gg++)
    {
        rectangle(original, human[gg].tl(), human[gg].br(), Scalar(0, 0, 255), 2, 8, 0);
    }
    imshow("Detect " + strCamera, original);
    int key6 = waitKey(40);
    // End of the detect
}

void stream(String strCamera) {
    VideoCapture cap(strCamera);
    if (cap.isOpened()) {
        while (true) {
            Mat frame;
            cap >> frame;
            resize(frame, frame, Size(640, 480));
            detect(frame, strCamera);
        }
    }
}

int main() {
    // Load your trained cascade file before starting the threads
    detectorBody.load("body_cascade.xml");
    // One thread per camera, each with its own stream URL
    thread cam1(stream, "http://xxxxxxxR");
    thread cam2(stream, "http://xxxxxxxR");
    cam1.join();
    cam2.join();
    return 0;
}


### Write video into file

On a Windows machine I usually work with the simple WMV format, and it works perfectly. Remember the golden rule of the video writer in OpenCV: the image Mat has to have the same size as the VideoWriter. The image is the Mat that I want to write as a frame into the video, so before I put it into the VideoWriter, I always resize it to the target size. This causes a lot of trouble; a missing video result is often due to this reason alone. Note also that the third VideoWriter parameter is the frame rate in frames per second, a plain number like 25, not the CAP_PROP_FPS property id.

Size SizeOfVideo = cv::Size(1024, 740);
VideoWriter video("Result.wmv", CV_FOURCC('W', 'M', 'V', '2'), 25, SizeOfVideo, true);
resize(image, image, SizeOfVideo);
video << image;
OR
video.write(image);

## Microsoft and cognitive service computer vision is one of the most visible on build 2017

Looks pretty cool: Microsoft machine learning for safety in the work environment. It recognizes people's identities, tracks them in real time, and evaluates their role and position with respect to safety. We will see this technology more often in the near future. Let's have a closer look at the strategy. Everyone who plays with computer vision knows that a demonstration is one thing and a general service available to employers is something totally different. Cool and interesting, like Amazon Go. Let's compare it with Amazon Go more closely.

### Microsoft cognitive vs Amazon Go

Both are huge co-leaders of the cloud computing market. Obviously, whoever has this computational power can enter the future of everything, from small IoT devices to large-scale distributed intelligent platforms driven by machine learning. There is maybe a question why Microsoft does not follow the Amazon Go concept in a general form, open to everybody: analysis of video streams for retail statistics and marketing. I know it is a conference demonstration, mentioned everywhere. Maybe I can add some original thoughts on why to slightly change the focus.

### Employees monitoring system

This is related to mentality, human resources, and people's comfort in the working environment. Sure, as an employer you can control how effectively your money is spent, covered by all the employee-safety buzzwords. In some environments with strict rules, where life or serious health problems are at stake, it is necessary to go this way. Hard stops and sensors are already part of these kinds of environments. But can we expect that these environments will send video to be analysed somewhere as a service? Is this really the best market? Is this what everybody wants? Is this dangerous, does it provide a safety benefit, or does it just make people leave the companies that push privacy questions so hard beyond the borders?

Do you know the details of the architecture? Let me comment. For a service that processes video streams in the cloud, there are several problems, and much more critical ones when we are talking about health monitoring of anyone.

### Amazon Go does the right thing

The advantage of the Amazon Go concept is that it has more features. They count not only on computer vision but also on sensor fusion from different sources to provide better features processed by deep learning. The main advantage over Microsoft is that Amazon focuses on its own environment while Microsoft targets a general one. The problem is the general case: not in retail, but in environments where human health is concerned, this could be a critical problem. Where Amazon has handled the same situation in the same environment and lighting conditions hundreds of times, Microsoft has to adapt to every possible light, environment, and other variables related to deployment in different places, and what is worse, in much more critical applications and situations.

This could be hard and could slow down bringing this to market.

### Microsoft go to harder segment for computer vision

Again, just a comparison of the Microsoft concept with the situation of Amazon Go, and why the Microsoft position is just a little bit uncomfortable.

Amazon Go can easily solve a customer's problem: refund the money for customer complaints, as the application is basically not so critical. On the other hand, Microsoft needs, in cases like hospitals and security-critical environments, to count on certification problems, legal issues, and more. In the Microsoft segment there is much more responsibility. This is why Amazon can speed up development based on experience in a real application, instead of a segment where it is necessary to be perfect.
On the other hand, AI is able to control cars by itself, without a human. That segment is also a little bit risky from this point of view.

I really think they should start competing with Amazon Go as a general service for all retailers: somehow bound the requirements for the store environment and use sensor fusion. For example, in a medical application like a hospital it is maybe a good idea to use thermal cameras combined with normal ones.

Provide benefits for the customer and highly valuable information about customer behavior to boost retail and advertisement impact. This is just a safe, no-problem issue.

### Video stream delivery

I think Microsoft mostly provides all AI in the form of a service: you deliver a stream or image request, and they give you results back. The limitations here are bandwidth, video stream quality, and time delays. What if the network goes down, or the resolution or frame rate drops, and something happens? Cars have their brain in their body; this service has its brain somewhere else. I expect so, at least; I do not know for sure.
Microsoft can guarantee availability of the service and accuracy, but who will be responsible in case the system is not able to connect and something happens? Just a case. You always need to count on the worst one and hope it never happens.

Delay. Video streams of real-time broadcasting are delayed. Delay is here, and delay in video will be here in the future as well. When some situation occurs, seconds play a role in the Microsoft case. If the alert response comes back with a delay, this service will be replaced by something else. Maybe it is better to deploy something in the form of IoT devices rather than services. Maybe this kind of service should provide the pretrained parameters of a deep neural network to IoT devices, which compute only the forward pass without learning, instead of transferring and analyzing the whole content and responding from somewhere else. Who knows what the final solution will be. The power of the cloud and of machine learning in this case needs the response to be available on time. That means calculation directly in the camera device or on the local network.

#### Good luck to Microsoft. Here is a fan. Let me test your solution! :) Be careful.

Do you like this post? Feel free to share; this keeps me making this kind of content. Hopefully I will also find time for some new tutorial posts.

## Future of Machine learning in 2017 from the dark side

Machine intelligence is our future. It is almost everywhere on the web in some form. Machine learning has started to be part of small devices, distributed systems, cars, cameras, and many others: widely connected, distributed, and able to do incredible things. Every technology has its pluses and minuses. I will try to focus on one strange minus, powerful enough to destroy the future of individual people, governments, and institutions.

### Machine learning generated FAKE news

This will be a serious problem in the near future. Even now it is a problem to realize what is true and what is not. It is hard to find the good ones in the heap of resources and to believe what the trusted media bring on board. Most probably you have heard about PewDiePie vs the Wall Street Journal. The whole thing is just obscure. Calling the joking YouTuber a racist and so on is a little bit too much. Sure, everyone has a different sense of humor, and it also differs between countries. To be visible and famous and not piss off anybody in this world is almost impossible. Still, this is only the internet, this is only humor, and it is hard to find the truth in what is written.

### What people believe

Maybe in the near future people will stop trusting written media at all. What about the other media, can we expect the same? We trust most of the things that are visible on the screen, and even more so with sound. Probably you know the following video. If not, go on.

Project Page: http://www.graphics.stanford.edu/~nie...

Scary. The future of machine learning is almost like any breakthrough technology: lots of positives, tons of negatives.

### Trusted media

Consider the behavior of trusted media, which fight for our every advertisement-per-click rating; basically, money follows strange rules. In better cases, they publish almost everything caught on camera or audio as the truth. In worse examples, they just speculate over the pictures. Even worse, they bring fake pictures online. This could be a worse problem than the media standing side by side with their owners.

### Fake on real faces

In the video above, I cannot recognize the difference between real behavior and acted behavior. We can expect whole generated fake-news scenes: whole situations. Recurrent neural networks now successfully generate music, but they can also generate a believable tone and character of a concrete person's voice.

We have fake behavior in video, and we can generate speech that follows the real template of a concrete person.

### Challenge for machine learning of 2017

Now we are successfully doing interesting stuff against ourselves. We also need to find a way to use this technology to defend us against fake news, fake actors, and fake speech. There is real power here to destroy a lot of the things above us.

## Opencv target tracking example

Computer vision is just super fun: machine learning with immediately visible results. This is so powerful in bringing more people into the game. Applying k-means clustering to a data set of millions of lines and obtaining 10 clusters: cool, sure, but where is the fun? This field, though, is actually super cool: machine learning, good knowledge of video and image properties, optical flow, control theory, optimization, feature extraction, and other super cool stuff.

## Build install Opencv with Contrib, Visual studio 2017

An easy install and build of OpenCV 3+ (tested on version 3.2) with the contrib library and additional features, described step by step, picture by picture. After this tutorial you can modify the CMake project settings according to your hardware and available libraries to build your own OpenCV library. Most of the time, prebuilt libs with already generated DLLs and LIBs are used to start a project and begin coding. Now that the new Visual Studio 2017 is available, there are no prebuilt libraries for v141, which is, from my point of view, a confusing name for the libraries compatible with Visual Studio 2017.

### Opencv VS 2017 install options

Alternatives to this tutorial; you can skip this.
1. It is possible to use a compatibility pack targeting v140 and use the same prebuilt libraries as for Visual Studio 2015; this is described here.
2. The second way is to try a prebuilt NuGet package. I use NuGet a lot: a simple installation with one line of code inside the NuGet package console. here

### Opencv Install and Build in Visual studio 2017 prerequisites

Do not be afraid. There are lots of tricky parts for sure. You can miss some prerequisites for sure, and you can fail many times. My last compilation had one error linking Python; I do not care, the libraries I need are fine.

1. Install CMake. Choose the Windows installer (Win32 Installer), cmake-3.4.0-win32-x86.exe.

2. Install Visual Studio 2017 Community with C++ and C support, maybe also cross-platform C++ and C. This takes lots of space, around 20 GB per installation. Who knows what is inside. :)

I have created an opencv32 folder in c:/.

Into this folder I extract the source code of OpenCV 3.2.0 and opencv_contrib-3.2.0 from the following links; just unzip them here.
Then create one blank folder here called opencv32build.
https://github.com/opencv/opencv/releases
https://github.com/opencv/opencv_contrib/releases

### Configure CMAKE OPENCV project

In CMake, configure the project for Visual Studio 2017 and check what is available to be built into your own OpenCV libraries: FFMPEG, OpenCL, CUDA, and others.
Add the path to opencv-3.2.0, where the base OpenCV source code is, and the path to the empty build folder you created.
Now you need to specify the Visual Studio 2017 compilers for C and C++. Mine are in the picture.

You can hit configure several times, until you have a list of possible settings like in the image below. Use different settings and configuration options; just pick the solution you want, for example FFMPEG and OpenCL support, and generate the OPENCV.sln file inside your opencv32build folder.

Do not choose everything; CMake checks and configures only what is possible.

### Opencv Extra Modules contrib libraries

This is a great source of modern algorithms to use: CNNs, advanced tracking, and detection like WaldBoost. Just fill OPENCV_EXTRA_MODULES_PATH in CMake with the path where you extracted the zip from the contrib git repository. Hit generate in CMake and your OPENCV.sln file is upgraded.

### Build and deploy opencv

Open the generated OPENCV.sln file inside opencv32build from Visual Studio 2017.

Visual Studio will ask you if you want to upgrade the toolchain to Visual Studio 2017. CMake generates a 2015 project, but Visual Studio upgrades it anyway.

In the solution menu you can see everything you want to build, like in the picture.

1. First, select the DEBUG x64 configuration like in the picture, right-click the entire solution, and hit Build Solution.
2. Second, switch from DEBUG to RELEASE and build the solution again. This also builds the CMake target install (you can see it under install), which is where your installation is located.

You can see my release build: 114 modules and 0 failures. It should work. And it works.

The header files are located in include, along with the x64 libraries and DLLs.

This is your own build of OpenCV, specific to your hardware and software, for any new release of Visual Studio. This is 2017.

Now set up a Visual Studio project as usual, using the headers and libs you have built.

## Simple Installation opencv Visual Studio 2017

A simple installation of OpenCV for Visual Studio 2017, shown by image examples. An easy and fast way to start coding in OpenCV with NuGet packages. If you plan to use CUDA or some advanced OpenCV settings, you should install OpenCV in a different way, for example by building your own libs according to the hardware you have available. That will be my next tutorial.

If you want to play and have fun in Visual Studio 2017, this is the tutorial for you. All my OpenCV tutorials now run just on a NuGet package installation. For the most common purposes this installation is just fine.

### OPENCV VISUAL STUDIO 2017 need vs141 libs version

This doesn't make any sense to me. Visual Studio 2012 was lib version 110, 2013 was 120, Visual Studio 2015 had lib version v140, and now the big step forward: for Visual Studio 2017 you are looking for DLLs, libs, etc. compatible with v141. What a big step from the previous Visual Studio. :)

### Install opencv under 1 minute

On the NuGet package site, find a NuGet distribution. It will install the libraries, DLLs, and headers right into your project. Find the right NuGet for you at https://www.nuget.org/packages

Find one supporting v141:
openCV for Windows
320.1.1-vs141

I tried this NuGet and it works fine for an x86 build in VS 2017.
Install-Package opencv.win.native -Pre
x86/x64 builds of the OpenCV 3.2 release for Visual Studio 2017

Install step by step

• Create an empty C++ project.
• Right-click Source Files and add a new source.cpp file.
• Add the source code. You will see unresolved dependencies and other issues, just because your functions are trying to find where they are implemented.
• Your Visual Studio 2017 needs the NuGet package manager extension installed. The installation of VS 2017 is more modular than before, but NuGet is one of the most common extensions.
• In Tools - NuGet Package Manager, open the Package Manager Console.
• Here simply write Install-Package opencv.win.native -Pre, or any other package you find.
• In the second picture you can see that the NuGet package is installed.

Now you can compile at least the x86 release build without any problem. I tested just this package. The next tutorial shows how to build your own libraries. See you.

## Helping on stackoverflow

Maybe some of these ideas are useful for you as well.

### What was the question?

You have one incoming stream of people and you need to determine the exit points: right, left, or straight ahead. Use detection plus statistics, or full tracking?

### Tracking is best to solve this problem, I think

This was my answer.
The most accurate way is to use a tracking algorithm instead of statistically counting the appearance of incoming people and detecting occurrences on the left, right, and middle.
You can use extended statistical models that describe how many inputs produce each of the outputs, and back-validate the input from the output detections.

My experience is that tracking leads to better results than the approach above, but it is also a little bit complicated. We are talking about multi-target tracking, where the critical part is matching each detection with the tracked model, which should be updated based on the detection. If a detection is matched with the wrong model, the problems start there.

On YouTube I show a multi-target tracker I developed with a simple LBP people detector, but with multiple models and a Kalman filter for tracking. Both capabilities are available in OpenCV. When something is detected you need to create a new Kalman filter for each object and update it whenever you match the same detection again, predict when the detection is missing in a frame, and also remove the Kalman filter when it is no longer necessary to track.
1. Detect.
2. Match detections with Kalman filters, e.g. the Hungarian algorithm with the L2 norm.
3. Lots of work: decide whether a Kalman filter should be established, removed, or updated, or whether the object was not detected and should be predicted. There is a lot of work here.
The pure statistical approach is less accurate; the second one takes an experienced person at least one month of coding and 3 months of tuning. If you need to be faster and your resources are quite limited, you can achieve your results with smart statistics over pure detection much faster and only a little bit less accurately. People tend to judge by eye, but video tracking, even multi-target tracking, is capable of beating a human: try to count and register each person in a video and count the exit points; beyond some number of people you are not able to do it. It really depends on what you want, the application, the customer you have, and the results you show to customers. If the outcome is 4 numbers (incoming, left, right, middle) and your error is 20 percent, it is still much more than one bored, poorly paid guard would achieve by counting all day long.
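The matching step in point 2 can be sketched without OpenCV. Below is a minimal greedy nearest-neighbour assignment, a simpler stand-in for the full Hungarian algorithm mentioned above; the Pt struct, the maxDist gating threshold, and the function name are my own illustration, not code from the original answer:

```cpp
#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Greedily assign each track the closest unused detection within maxDist.
// Returns, for each track index, the matched detection index or -1.
std::vector<int> matchDetections(const std::vector<Pt>& tracks,
                                 const std::vector<Pt>& dets,
                                 float maxDist)
{
    std::vector<int> match(tracks.size(), -1);
    std::vector<bool> used(dets.size(), false);
    for (size_t t = 0; t < tracks.size(); ++t) {
        float best = maxDist;
        int bestJ = -1;
        for (size_t j = 0; j < dets.size(); ++j) {
            if (used[j]) continue;
            float dx = tracks[t].x - dets[j].x;
            float dy = tracks[t].y - dets[j].y;
            float d = std::sqrt(dx * dx + dy * dy);  // L2 norm
            if (d < best) { best = d; bestJ = static_cast<int>(j); }
        }
        if (bestJ >= 0) { match[t] = bestJ; used[bestJ] = true; }
    }
    return match;
}
```

A track that gets -1 back is the "predict only" case in point 3; a detection left unused is a candidate for a new Kalman filter.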

On my blog you can find some datasets for people detection and car detection, as well as scripts for learning, ideas, tutorials, and tracking examples.
[Opencv blog tutorials code and ideas][2]

[2]: https://funvision.blogspot.com/

## Support vector machine with histogram of oriented gradients, trained near-online, and a tracker

Building an SVM detector based on HOG features is a relatively simple process when it does not have to be robust and the detector is focused on only one object. You can build it by combining several available OpenCV tutorials and source codes distributed in the OpenCV samples. There is maybe one thing that is not natural and cannot be taken from the examples and tutorials: the training set in online training is only 20 positive images warped over the positive window and 30 random negative samples.

What do you think?

### Tutorial on SVM with HOG, tracker soon

This is just an example; the tutorial will be available later. The code is a little bit complex.

## Research results in vibration control, Time delay systems and control theory

How I forced myself through the Ph.D. thesis. This is my background, miles away from images, video, and machine learning. In the end, it is all the same: math. Math is the connection, and again algebra, a little bit of optimization, and math again.

I focused mainly on vibration control, modern control theory like H-infinity optimization, MIMO systems, and systems with time delay. It all started with a project connected to the Boeing ACFA 2020: a crazy new flexible construction that needs an upgraded control law to match this new concept.
Another crazy project was related to time delays, signal shapers, vibration control, and resonators.
Love it.

### Is this related to computer vision ?

No. I fell in love with computer vision for many reasons. You can see the results; you can achieve and build something with your own skills. You don't need to construct a machine to drive and test something, and computer vision is a really cool combination of programming skills, algorithm optimization, smart solutions, machine learning, math, and much more.
Maybe I have visited too many areas to get close to a perfect state in any single one of them. But this unique connection of knowledge gives me so much more than staying on one topic for a whole life.

### Impacted journals

[1] Vyhlidal, Tomas, Kucera, Vladimir, Hromcik, Martin, :Signal shaper with a distributed delay: Spectral analysis and design , In: AUTOMATICA , 2013 , pp.: 3484-3489 , ISSN.: 0005-1098 ,WOS:000326553300038 , Web of Science® Times Cited: 2
[2] Vyhlidal, Tomas, Olgac, Nejat, Kucera, Vladimir, :Delayed resonator with acceleration feedback - Complete stability analysis by spectral methods and vibration absorber design , In: JOURNAL OF SOUND AND VIBRATION , 2014 , pp.: 6781-6795 , ISSN.: 0022-460X
[3] Vyhlidal T., Hromcik M., Kucera V. Anderle M., On feedback architectures with zero vibration signal, Submited to IEEE TAC, accepted
[4] Kucera V., Pilbaurer D. ,Vyhlidal T., Olgac N, “Extended Delayed Resonators Implementation aspects and experimental verification” IFAC Mechatronics
[5] Kucera V., Vyhlidal T. “Stability analysis of double delayed resonator” Final revision

### Conference papers

[6] Sipahi R.,Kucera V,Vyhlidal T.,Stability Analysis and Control Design of a Vibration Control System with Uncertain and Tunable Delays, Accepted 2015 IFAC Workshop on Time Delay Systems
[7] Vyhlidal, Tomas; Hromcik, Martin; Kucera, Vladimir Inverse signal shapers in effective feedback architecture Conference: European Control Conference (ECC) Location: ETH Zurich, Zurich, SWITZERLAND Date: JUL 17-19, 2013
[8] Double oscillatory mode compensation by inverse signal shaper with distributed delays By: Vyhlidal, Tomas; Hromcik, Martin; Kucera, Vladimir; et al. Book Group Author(s): IEEE Conference: 13th European Control Conference (ECC) Location: Univ Strasbourg, Strasbourg, FRANCE Date:JUN 24-27, 2014

[9] Vyhlidal, Tomas, Kucera, Vladimir, Hromcik, Martin, :Spectral features of ZVD shapers with lumped and distributed delays , In: 2013 9TH ASIAN CONTROL CONFERENCE (ASCC) , 2013 , pp.: , ISBN.: 978-1-4673-5767-8 ,WOS:000333734900326
[10] Haniš, T. - Kučera, V. - Hromčík, M.: Low Order H∞Optimal Control for ACFA Blended Wing Body. In: 4th European Conference for Aerospace Sciences. Paříž: Eucass, 2011, art. no. 604,
[11] Kučera, V. - Hromčík, M. - Vyhlídal, T.: Delay-Based Shapers for Controlling Vibration in Future Aircraft. In: Proceedings of the 13th Mechatronics Conference. Linz: Universität Linz, 2012, p. 267-272. ISBN 978-3-99033-042-5
[12] Kucera, Vladimir; Hromcik, Martin; Vyhlidal, Tomas, Experimental comparison of signal shapers with lumped and distributed delays International Conference on Process Control (PC)Location: SLOVAKIA Date: JUN 18-21, 2013
[13] Hanis T.,Kucera V.,Hromcik M.” Low Order H_inf Optimal Control for ACFA Blended Wing Body”,4th European Conference for Aerospace Sciences, Saint Petersburg 2011
[14] Kucera V., Hromcik M. “Delay-based input command shapers: frequency properties and finite-dimensional alternatives”, Proceedings of the 18th IFAC World Congress, Milan 2011
[15] Kucera V., Hromcik M. “Delay-based input shapers in feedback interconnections”, Proceedings of the 18th IFAC World Congress, Milan 2011
[16] Kucera V., Hromcik M. “Delay-based signal shapers and ACFA 2020 BWB aircraft FCS”, 4th European Conference for Aerospace Sciences, Saint Petersburg 2011
[17] Kucera V., Hromcik M., “Feed-Forward Delay-based Input Shaper Confrontation With Active Feedback Control”, The 6th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, Prague 2011
[18] Kucera V., Hromcik M., “Signal Shapers for BWB Aircraft Control”, Proceedings of the 18th International Conference on Process Control, Tatranská Lomnica, 2011
[19] Vyhlidal T., Kucera V., Hromcik M., “Input Shapers with Uniformly Distributed Delays”, Accepted to 10th IFAC Workshop on Time Delay Systems, Boston, 2012

## Binary Convolutional neural network by XNOR.AI

A great idea to save memory and computation through a different number representation. Convolutional neural networks are expensive in their memory needs, specific hardware, and computational power. This simple trick is able to bring these networks to lower-power devices.

### Unique solution ?

I am thinking about this; let me know in the comments how unique this solution is. In the automotive industry, embedded hardware solutions close to this one already exist. For convolution layers it is unique, I guess.
GPUs have different types of registers to be able to calculate FLOAT16 faster than the 32-bit FLOAT32 representation. This approach basically goes further and represents a real 32-bit number as a binary number, which brings a 32x memory saving. Said another way, they introduce an approximation of Y = WX, where X is the input vector multiplied by the weight matrix W for each layer, and W and X are real numbers, float32 or float16. The approximation looks like
Y = αβWX
where alpha is a scalar, beta is a matrix, and W and X are just binary. Lower precision, but maybe enough for some applications. And instead of approximating an already learned network (BWN), they bring this into the training of the network. Computing the convolution layers during the forward pass also becomes much more efficient, so lower-power CPU devices could handle the problem in real time.
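The gain on the binary side can be sketched with a tiny example. Assuming weights and inputs are quantized to {-1, +1} and packed one bit per value (bit set = +1, bit clear = -1), the dot product of 32 values collapses to one XOR and a popcount. This helper is my own illustration of the trick, not code from the XNOR-Net paper, and it uses the GCC/Clang popcount builtin:

```cpp
#include <cstdint>

// Dot product of two 32-element {-1,+1} vectors, each packed into one
// 32-bit word (bit set = +1, bit clear = -1).
// Matching bits contribute +1, mismatching bits -1, so:
//   dot = 32 - 2 * popcount(a XOR b)
int binaryDot32(std::uint32_t a, std::uint32_t b)
{
    int mismatches = __builtin_popcount(a ^ b);  // GCC/Clang builtin
    return 32 - 2 * mismatches;
}
```

In a real binary layer this replaces the float multiply-accumulate inside the convolution, with the α and β scaling applied afterwards.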

### Binary representation: binary fields in C

I am going deeper into this technology. Let me introduce some operations with binary arrays in C. It is an exciting area, though nothing too special. Maybe you program only in higher-level languages like Java or Python, and then how to set one particular bit in C should be interesting for you. It is for me; there is no direct language support for it.

int A[2];  when your int is 32 bits wide, you have an array of two integers and can store 64 bits. If you represent an image in only black and white, that is quite limited for good features. But if you represent each color channel and the convolution filters this way, the representation is not so boring.
One convolution layer should then take only 1/32 of its original memory size.

int A[2];
Set the first bit of A (bit indices are 0-based):
setBit(A, 0);

Set the fifth bit of A:
setBit(A, 4);

This works only for a 32-bit int. In the real world we also have 16-bit, 64-bit and maybe other int representations...

void setBit(int *Field, int n) {
    Field[n / 32] |= 1u << (n % 32);    /* set bit n (0-based) */
}

void clearBit(int *Field, int n) {
    Field[n / 32] &= ~(1u << (n % 32)); /* clear bit n */
}

n / 32 is the index into the array, n % 32 is the bit position inside Field[i].
The magic is just moving the 1, binary 00001, to the right position by the << bit shift.
For example, setting bit n = 3 in 00000000 gives 00001000.

This is just an example of how to work with binary arrays in C. I have a plan to implement this in pure C and show more of the implementation.
On this principle it is possible to write a print function for the array and almost everything else you need.

#### Resources

Look at the company founded around this technology:
https://xnor.ai

#### Check also GitHub

There is some integration into Torch, and I think also into Yolo/darknet, but that needs some extension of the already existing layers.
https://github.com/allenai/XNOR-Net

## Intel invest in computer vision

Intel focuses more and more on computer vision technology, software and mainly the related hardware. In the past, Intel bought the Itseez company, which is mainly known for OpenCV and other related computer vision activity. Now Intel acquires Mobileye, which is another step forward against competitors like Nvidia in future intelligent autonomous cars.

### History of Mobileye

Since 1999 Mobileye has focused on vision-based safety technology for driving assistance, known as Advanced Driver Assistance Systems (ADAS). Their approach is different, and that is what makes this acquisition important. While Google tries to build a cheaper LIDAR sensor for autonomous vehicles, a company like Mobileye believes that a mono-vision camera is everything you need to understand the scene: cheaper than LIDAR and cheaper than stereo. Mono vision as the primary source of information that can handle traffic signs, pedestrians and vehicles is the key factor of this company's success. This is a little bit strange for me personally. A stereo camera makes sense, like human eyes, and some animals with mono-vision eye placement have to compensate with strange movements of the head. Simply put, I think that two cameras, like two eyes, are better than only one. It is no mystery that the most common number of eyes in nature is two.

This is a citation from the Mobileye website:

"All depth-perception cues for farther distances – such as perspective, shading, texture, and motion cues, that the human visual system uses in order to understand the visual world – are interpreted by a single eye"

Honestly, I do not know. Also, 360-degree surround-view mono-vision sensing is not the same as the stereo approach. It is more like a Samsung 360 camera than a stereo rig that can recover depth by triangulation.

### Computer vision Hardware and Software

Mobileye also focuses on specific computer vision hardware, the EyeQ chip, which is more complex than, for example, the CEVA-XM4 processor. This processor category is a System on Chip (SoC) specialized for car sensing. There are 8 CPUs, 18 computer vision processors, a security module, a wide-band sensor interface, an IO controller and iSRAM with boot ROM memory. All of it is certified for automotive use with passive cooling.
Check all the other specifications on their website.

Good luck to Mobileye under Intel. It is great that deep learning is not a place only for strong CUDA machines. Deep learning suits distributed training between many agents, but that is the training part. Classification could be handled by low-power specific processors like EyeQ or CEVA, where smarter architectures, vector units, specific float registers or XNOR convolutional neural nets with only a binary representation could beat down brute CUDA power.

By the way, XNOR is a great technology. Check the article later this month.

## Opencv in Visual studio 2017

This tutorial shows you how to install and use OpenCV 3+ in Visual Studio 2017. It is more a hack than a proper install. Using Nuget packages in the package console is a simple installation of OpenCV without setting environment variables and other installation troubles. Each Visual Studio version needs its own build of the OpenCV library, and the same goes for the Nuget packages. VS 2012 needs libs of version v110. If you match the following pictures together you get my point: VC12 libs are the version for Visual Studio 2013, VC14 libs are for Visual Studio 2015, and finally Visual Studio 2017 needs platform toolset and lib version v141. For OpenCV prebuilt libs we have to wait for VC141, hopefully. I never got the point of this naming convention, Visual Studio years versus v140, v141, v110, v120, but the current release is more confusing than the others.

The OpenCV library has prebuilt versions in this location after install.

If you want VC141 libs, you need to build them with CMake and Visual Studio 2017, as in my older tutorial; the same approach, just switch the Visual Studio version:
Build opencv for new version of Visual Studio in cmake

### Opencv visual studio 2017 and older toolset

The best approach is to set the toolset in the Visual Studio project to v140 and use the prebuilt version of the OpenCV libs for VS 2015, or use Nuget packages in a simple two-minute installation, also for version 2015, and enjoy coding in Visual Studio 2017 until somebody builds the libs for VS 2017.

Follow along and enjoy coding. Building OpenCV yourself is necessary when using embedded and specific hardware. When coding on Windows, it is better to wait for prebuilt libs than to install CMake on a Windows machine.

### Opencv nuget package console Visual studio 2017

The previous steps are really simple. Just open an empty C++ project and add one source file, source.cpp. There is no magic and nothing to block you from starting to code. Just follow the pictures and let me know if any problems occur.
Add some program, for example this one that captures video from a web camera. If you try to build and link it now, you will fail with unresolved dependencies. Now is the time to add the OpenCV library, and Nuget is the simplest option.


#include "opencv2\highgui.hpp"
#include "opencv2\imgproc.hpp"
#include "opencv2\objdetect\objdetect.hpp"
#include "opencv2\videoio\videoio.hpp"
#include "opencv2\imgcodecs\imgcodecs.hpp"

#include <vector>
#include <stdio.h>
#include <iostream>

using namespace cv;
using namespace std;

int main(int argc, const char** argv)
{
    VideoCapture cap(0); // open the default web camera

    for (;;)
    {
        if (!cap.isOpened()) {
            cout << "Video Capture Fail" << endl;
            break;
        }
        else {
            Mat img;
            cap >> img;             // grab the next frame
            if (img.empty()) break; // stream ended or camera disconnected
            namedWindow("Image", WINDOW_AUTOSIZE);
            imshow("Image", img);
            int key2 = waitKey(20); // display the frame and pump the GUI
        }
    }

    return 0;
}


Now open the Nuget package console: TOOLS → NuGet Package Manager → Package Manager Console. Do not be afraid of the console.

In the console just write PM> Install-Package opencvdefault
Only the part starting with Install. Then wait until your includes in Visual Studio are resolved and you see that the packages were successfully installed. We are not finished!

Now, if you try to build and link, you will have trouble with linking. Right-click the project name, go to the general settings and change the platform toolset to v140 (VS 2015). If you do not have v140, you need to install something like a compatibility pack with the v140 platform toolchain.

Compile, link and run: the web cam is here. It is a hack, not a proper installation for v141, but there is no reason for one; there is no difference. A better hack than building with CMake on a Windows machine if you are not skilled enough.