
Local, instructor-led live Computer Vision training courses demonstrate through interactive discussion and hands-on practice the basics of Computer Vision as participants step through the creation of simple Computer Vision apps.
Computer Vision training is available as "onsite live training" or "remote live training". Onsite live training can be carried out locally on customer premises in Norway or in NobleProg corporate training centers in Norway. Remote live training is carried out by way of an interactive, remote desktop.
NobleProg: Your Local Training Provider.
Testimonials
The practical approach.
Kevin De Cuyper
Course: Computer Vision with OpenCV
The ease of using the VideoCapture functionality to acquire video frames from the laptop camera.
HP Printing and Computing Solutions, Sociedad Limitada Unipe
Course: Computer Vision with OpenCV
I liked the trainer's instructions on how to use the tools. This is something that cannot be picked up from the internet, and it is very useful.
HP Printing and Computing Solutions, Sociedad Limitada Unipe
Course: Computer Vision with OpenCV
It was easy to follow.
HP Printing and Computing Solutions, Sociedad Limitada Unipe
Course: Computer Vision with OpenCV
Computer Vision Subcategories
Computer Vision Course Outlines
This course explores the application of Caffe as a deep learning framework for image recognition, using MNIST as the example.
Audience
This course is suitable for Deep Learning researchers and engineers interested in using Caffe as a framework.
After completing this course, delegates will be able to:
- Understand Caffe's structure and deployment mechanisms
- Carry out installation, production environment and architecture tasks and configuration
- Assess code quality, and perform debugging and monitoring
- Implement advanced production features such as training models, implementing layers, and logging
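As a rough illustration of the workflow, the sketch below trains the LeNet network on MNIST with pycaffe; the prototxt and weight file paths follow Caffe's bundled MNIST example and are assumptions for illustration, not part of the course outline.

```python
import caffe

# Assumes pycaffe is installed and the LeNet/MNIST example files from the
# Caffe repository are available at the paths below (illustrative paths).
caffe.set_mode_cpu()  # or caffe.set_mode_gpu() on a CUDA-capable machine

# The solver prototxt references the network definition and the MNIST LMDBs.
solver = caffe.SGDSolver('examples/mnist/lenet_solver.prototxt')
solver.solve()  # runs training as configured in the solver file

# Reload the trained weights for inference/deployment.
net = caffe.Net('examples/mnist/lenet.prototxt',
                'examples/mnist/lenet_iter_10000.caffemodel',
                caffe.TEST)
```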
Some of Marvin's video applications include filtering, augmented reality, object tracking and motion detection.
In this instructor-led, live course, participants will learn the principles of image and video analysis and utilize the Marvin Framework and its image processing algorithms to construct their own application.
Format of the Course
- The basic principles of image analysis, video analysis and the Marvin Framework are first introduced. Students are given project-based tasks which allow them to practice the concepts learned. By the end of the class, participants will have developed their own application using the Marvin Framework and libraries.
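Marvin itself is a Java framework, so the snippet below is not Marvin code; it is only a rough Python/OpenCV analog of one of the video applications mentioned above (motion detection via frame differencing), included for orientation.

```python
import cv2

cap = cv2.VideoCapture(0)            # default webcam as a stand-in video source
ok, prev = cap.read()
if not ok:
    raise SystemExit('no camera available')
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pixels that change between consecutive frames indicate motion.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:  # arbitrary sensitivity threshold
        print('motion detected')
    prev_gray = gray
    cv2.imshow('motion mask', mask)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```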
Audience
This course is aimed at engineers and architects seeking to use OpenCV for computer vision projects.
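For orientation, a minimal OpenCV example of the kind of operation covered early in such a course: loading an image, converting it to grayscale and running Canny edge detection ('sample.jpg' is a placeholder file name).

```python
import cv2

img = cv2.imread('sample.jpg')              # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)           # Canny edge detection, example thresholds

cv2.imshow('edges', edges)
cv2.waitKey(0)
cv2.destroyAllWindows()
```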
This instructor-led, live training (online or onsite) is aimed at software engineers who wish to program in Python with OpenCV 4 for deep learning.
By the end of this training, participants will be able to:
- View, load, and classify images and videos using OpenCV 4.
- Implement deep learning in OpenCV 4 with TensorFlow and Keras.
- Run deep learning models and generate impactful reports from images and videos.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
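As a hedged sketch of running a deep learning model inside OpenCV 4, the snippet below loads a frozen TensorFlow detection graph with the cv2.dnn module and draws confident detections; the model, config and image file names are placeholders.

```python
import cv2

net = cv2.dnn.readNetFromTensorflow('frozen_inference_graph.pb',  # placeholder model
                                    'graph.pbtxt')                # placeholder config

img = cv2.imread('street.jpg')                                    # placeholder image
blob = cv2.dnn.blobFromImage(img, size=(300, 300), swapRB=True)
net.setInput(blob)
detections = net.forward()            # SSD-style output: (1, 1, N, 7)

h, w = img.shape[:2]
for det in detections[0, 0]:
    score = float(det[2])
    if score > 0.5:                   # keep confident detections only
        x1, y1, x2, y2 = (det[3:7] * [w, h, w, h]).astype(int)
        cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite('street_detections.jpg', img)
```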
Format of the Course
- This course introduces the approaches, technologies and algorithms used in the field of pattern matching as it applies to Machine Vision.
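One of the simplest pattern matching techniques likely touched on is template matching; the sketch below uses OpenCV's matchTemplate to locate a pattern inside a scene image (both file names are placeholders).

```python
import cv2

scene = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)      # placeholder scene image
pattern = cv2.imread('pattern.png', cv2.IMREAD_GRAYSCALE)  # placeholder template

# Slide the template over the scene and score each position.
result = cv2.matchTemplate(scene, pattern, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

print('best match score:', max_val, 'at top-left corner', max_loc)
```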
In this instructor-led, live training, participants will learn the basics of Computer Vision as they step through the creation of a set of simple Computer Vision applications using Python.
By the end of this training, participants will be able to:
- Understand the basics of Computer Vision
- Use Python to implement Computer Vision tasks
- Build their own face, object and motion detection systems
Audience
- Python programmers interested in Computer Vision
Format of the Course
- Part lecture, part discussion, exercises and heavy hands-on practice
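A minimal face detection sketch of the kind built in this course, using the Haar cascade bundled with the opencv-python package ('people.jpg' is a placeholder input).

```python
import cv2

# Load the frontal-face Haar cascade shipped with the opencv-python wheel.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

img = cv2.imread('people.jpg')                     # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv2.imwrite('people_faces.jpg', img)
```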
The hardware used in this lab includes a Raspberry Pi, a camera module, servos (optional), etc. Participants are responsible for purchasing these components themselves. The software used includes OpenCV, Linux, Python, etc.
By the end of this training, participants will be able to:
- Install Linux, OpenCV and other software utilities and libraries on a Raspberry Pi.
- Configure OpenCV to capture and detect facial images.
- Understand the various options for packaging a Raspberry Pi system for use in real-world environments.
- Adapt the system for a variety of use cases, including surveillance, identity verification, etc.
Format of the Course
- Part lecture, part discussion, exercises and heavy hands-on practice
Note
- Other hardware and software options include: Arduino, OpenFace, Windows, etc. If you wish to use any of these, please contact us to arrange.
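As a rough sketch of the capture-and-detect loop configured on the Raspberry Pi, assuming the camera module is exposed to OpenCV as video device 0 (for example via the V4L2 driver):

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)                 # Pi camera assumed to appear as device 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('faces', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```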
Keras is a high-level neural networks API for fast development and experimentation. It runs on top of TensorFlow, CNTK, or Theano.
This instructor-led, live training (online or onsite) is aimed at developers who wish to build a self-driving car (autonomous vehicle) using deep learning techniques.
By the end of this training, participants will be able to:
- Use computer vision techniques to identify lanes.
- Use Keras to build and train convolutional neural networks.
- Train a deep learning model to differentiate traffic signs.
- Simulate a fully autonomous car.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
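To illustrate the kind of convolutional network used for the traffic sign task, here is a minimal Keras sketch; the 32x32 RGB input size and 43 classes are assumptions matching a GTSRB-style dataset, not something fixed by the course outline.

```python
from tensorflow.keras import layers, models

# Small CNN for traffic sign classification (assumed 32x32 RGB inputs, 43 classes).
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(43, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# model.fit(x_train, y_train, epochs=10, validation_split=0.2)  # with your own data
```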
Audience
This course is aimed at engineers and developers seeking to develop computer vision applications with SimpleCV.
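For orientation, a tiny SimpleCV capture-and-display loop; SimpleCV is a Python layer over OpenCV and its API details may vary by version, so treat this as a sketch rather than a definitive example.

```python
from SimpleCV import Camera, Display

cam = Camera()            # default camera
disp = Display()

while disp.isNotDone():
    img = cam.getImage()
    img = img.binarize()  # simple thresholding as an example operation
    img.save(disp)        # render the processed frame to the display window
```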
This instructor-led, live training (online or onsite) is aimed at developers who wish to build hardware-accelerated object detection and tracking models to analyze streaming video data.
By the end of this training, participants will be able to:
- Install and configure the necessary development environment, software and libraries to begin developing.
- Build, train, and deploy deep learning models to analyze live video feeds.
- Identify, track, segment and predict different objects within video frames.
- Optimize object detection and tracking models.
- Deploy an intelligent video analytics (IVA) application.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
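As a hedged sketch of hardware-accelerated detection on a live stream using OpenCV's DNN module (this requires an OpenCV build with CUDA support; the model files and stream URL below are placeholders):

```python
import cv2

# Placeholder SSD-style detector in Caffe format; any cv2.dnn-compatible model works.
net = cv2.dnn.readNetFromCaffe('deploy.prototxt', 'model.caffemodel')
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)   # run inference on the GPU
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

cap = cv2.VideoCapture('rtsp://camera.example/stream')  # placeholder stream URL
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(frame, 1.0, (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()   # one (class, score, box) row per detection
    # ...filter by confidence, draw boxes, feed a tracker, publish analytics, etc.

cap.release()
```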
This instructor-led, live training (online or onsite) is aimed at backend developers and data scientists who wish to incorporate pre-trained YOLO models into their enterprise-driven programs and implement cost-effective components for object-detection.
By the end of this training, participants will be able to:
- Install and configure the necessary tools and libraries required in object detection using YOLO.
- Customize Python command-line applications that operate based on YOLO pre-trained models.
- Implement the framework of pre-trained YOLO models for various computer vision projects.
- Convert existing datasets for object detection into YOLO format.
- Understand the fundamental concepts of the YOLO algorithm for computer vision and/or deep learning.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
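As a rough sketch of driving a pre-trained YOLO model from Python, the snippet below runs Darknet YOLOv3 weights through OpenCV's DNN module and applies non-maximum suppression; the .cfg/.weights/image file names are the conventional ones and should be treated as placeholders.

```python
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')  # placeholder files

img = cv2.imread('image.jpg')                                     # placeholder image
h, w = img.shape[:2]
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

boxes, scores = [], []
for output in outputs:
    for det in output:               # det = [cx, cy, bw, bh, objectness, class scores...]
        class_scores = det[5:]
        class_id = int(np.argmax(class_scores))
        confidence = float(class_scores[class_id])
        if confidence > 0.5:
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(confidence)

# Non-maximum suppression removes overlapping duplicate boxes.
keep = cv2.dnn.NMSBoxes(boxes, scores, 0.5, 0.4)
print('detections kept:', len(keep))
```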