Please use this identifier to cite or link to this item:
http://cris.utm.md/handle/5014/385
DC Field | Value | Language |
---|---|---|
dc.contributor.author | BURLACU, Alexandru | en_US |
dc.date.accessioned | 2020-04-12T16:41:51Z | - |
dc.date.available | 2020-04-12T16:41:51Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | BURLACU, Alexandru. Overview of computer vision supervised learning techniques for low-data training. In: Electronics, Communications and Computing. Editia a 10-a, 23-26 octombrie 2019, Chişinău. Chișinău, Republica Moldova: Universitatea Tehnică a Moldovei, 2019, p. 44. ISBN 978-9975-108-84-3. | en_US |
dc.identifier.isbn | 978-9975-108-84-3 | - |
dc.identifier.uri | https://ibn.idsi.md/ro/vizualizare_articol/87114 | - |
dc.identifier.uri | http://cris.utm.md/handle/5014/385 | - |
dc.description.abstract | This work is an overview of techniques of varying complexity and novelty for supervised, or rather weakly supervised, learning in computer vision. With the advent of deep learning, the number of organizations and practitioners who believe they can solve their problems with it also grows. Deep learning algorithms normally require vast amounts of labeled data, but depending on the domain a large, well-annotated dataset is not always available; consider healthcare. This paper starts by giving background on supervised, weakly-supervised, and self-supervised learning in general, and in computer vision specifically. It then describes various methods that ease the need for a large labeled dataset. The paper describes the importance of these methods in fields such as medical imaging, autonomous driving, and even autonomous drone navigation. Starting with simple methods like knowledge transfer, it also describes a number of knowledge distillation techniques and ends with the latest self- and semi-supervised methods, such as Unsupervised Data Augmentation (UDA), MixMatch, Snorkel, and adding synthetic tasks to the learning model, thus touching on the multi-task learning problem. Finally, topics and papers not yet reviewed are mentioned with brief commentary, and the paper closes with a discussion section. This paper does not cover few-shot/one-shot learning, because that is another large sub-domain with a scope somewhat different from that of weakly-supervised and self-supervised learning. | en_US |
dc.language.iso | en | en_US |
dc.subject | knowledge distillation | en_US |
dc.subject | knowledge transfer | en_US |
dc.subject | self-supervised learning | en_US |
dc.subject | semi-supervised learning | en_US |
dc.subject | weakly-supervised learning | en_US |
dc.title | Overview of computer vision supervised learning techniques for low-data training | en_US |
dc.type | Article | en_US |
dc.relation.conference | Electronics, Communications and Computing | en_US |
item.grantfulltext | open | - |
item.languageiso639-1 | other | - |
item.fulltext | With Fulltext | - |
crisitem.author.dept | Department of Software Engineering and Automatics | - |
crisitem.author.parentorg | Faculty of Computers, Informatics and Microelectronics | - |
Appears in Collections: | Conference Abstracts |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
44-44_8.pdf | | 444.69 kB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
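The abstract above lists knowledge distillation among the reviewed low-data techniques. As an illustration of the core idea, here is a minimal sketch of the classic soft-target distillation loss (KL divergence between temperature-softened teacher and student output distributions). All logit values and the temperature choice below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over the last axis, with temperature scaling."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened class distributions.

    A higher temperature exposes the teacher's "dark knowledge": the
    relative probabilities it assigns to incorrect classes.
    """
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Illustrative logits for a 3-class problem.
teacher = np.array([4.0, 1.0, 0.2])
student = np.array([3.5, 1.2, 0.1])
loss = distillation_loss(student, teacher)
```

In practice this term is mixed with the ordinary cross-entropy on whatever labeled data is available, which is what makes distillation useful in the low-data regimes the abstract discusses.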