{{Short description|Machine learning model family}}
[[File:R-cnn.svg|thumb|272x272px|R-CNN architecture]]
'''Region-based Convolutional Neural Networks (R-CNN)''' are a family of machine learning models for [[computer vision]], and specifically [[object detection]] and localization.<ref name=":0">{{Cite book |last1=Zhang |first1=Aston |title=Dive into deep learning |last2=Lipton |first2=Zachary |last3=Li |first3=Mu |last4=Smola |first4=Alexander J. |date=2024 |publisher=Cambridge University Press |isbn=978-1-009-38943-3 |___location=Cambridge New York Port Melbourne New Delhi Singapore |chapter=14.8. Region-based CNNs (R-CNNs) |chapter-url=https://d2l.ai/chapter_computer-vision/rcnn.html}}</ref> The original goal of R-CNN was to take an input image and produce a set of [[Minimum bounding box|bounding boxes]] as output, where each bounding box contains an object and also the category (e.g. car or pedestrian) of the object. In general, R-CNN architectures perform selective search<ref name=":1">{{Cite journal |last1=Uijlings |first1=J. R. R. |last2=van de Sande |first2=K. E. A. |last3=Gevers |first3=T. |last4=Smeulders |first4=A. W. M. |date=2013-09-01 |title=Selective Search for Object Recognition |url=https://link.springer.com/article/10.1007/s11263-013-0620-5 |journal=International Journal of Computer Vision |volume=104 |issue=2 |pages=154–171 |doi=10.1007/s11263-013-0620-5 |issn=1573-1405|url-access=subscription }}</ref> over feature maps outputted by a CNN.
R-CNN has been extended to perform other computer vision tasks, such as: tracking objects from a drone-mounted camera,<ref>{{Cite news |last=Nene |first=Vidi |date=Aug 2, 2019 |title=Deep Learning-Based Real-Time Multiple-Object Detection and Tracking via Drone |url=https://dronebelow.com/2019/08/02/deep-learning-based-real-time-multiple-object-detection-and-tracking-via-drone/ |access-date=Mar 28, 2020 |work=Drone Below}}</ref> locating text in an image,<ref>{{Cite news |last=Ray |first=Tiernan |date=Sep 11, 2018 |title=Facebook pumps up character recognition to mine memes |url=https://www.zdnet.com/article/facebook-pumps-up-character-recognition-to-mine-memes/ |access-date=Mar 28, 2020 |publisher=[[ZDNET]]}}</ref> and enabling object detection in [[Google Lens]].<ref>{{Cite news |last=Sagar |first=Ram |date=Sep 9, 2019 |title=These machine learning methods make google lens a success |url=https://analyticsindiamag.com/these-machine-learning-techniques-make-google-lens-a-success/ |access-date=Mar 28, 2020 |work=Analytics India}}</ref>
* November 2013: '''R-CNN'''.<ref name=":2" />
* April 2015: '''Fast R-CNN'''.<ref name=":3">{{Cite book |last=Girshick |first=Ross |chapter=Fast R-CNN |date=7–13 December 2015 |title=2015 IEEE International Conference on Computer Vision (ICCV)}}</ref>
* June 2015: '''Faster R-CNN'''.<ref name=":4">{{Cite journal |last1=Ren |first1=Shaoqing |last2=He |first2=Kaiming |last3=Girshick |first3=Ross |last4=Sun |first4=Jian |date=2017-06-01 |title=Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks}}</ref>
* March 2017: '''Mask R-CNN'''.<ref name=":5">{{Cite book |last1=He |first1=Kaiming |last2=Gkioxari |first2=Georgia |last3=Dollar |first3=Piotr |last4=Girshick |first4=Ross |chapter=Mask R-CNN |date=October 2017 |title=2017 IEEE International Conference on Computer Vision (ICCV)}}</ref>
* December 2017: '''Cascade R-CNN''' is trained with increasing Intersection over Union (IoU, also known as the [[Jaccard index]]) thresholds, making each stage more selective against nearby false positives.<ref>{{Cite journal |last1=Cai |first1=Zhaowei |last2=Vasconcelos |first2=Nuno |date=2017 |title=Cascade R-CNN: Delving into High Quality Object Detection |arxiv=1712.00726 }}</ref>
* June 2019: '''Mesh R-CNN''' adds the ability to generate a 3D mesh from a 2D image.<ref>{{Cite journal |last1=Gkioxari |first1=Georgia |last2=Malik |first2=Jitendra |last3=Johnson |first3=Justin |date=2019 |title=Mesh R-CNN |url=https://openaccess.thecvf.com/content_ICCV_2019/html/Gkioxari_Mesh_R-CNN_ICCV_2019_paper.html |pages=9785–9795|arxiv=1906.02739 }}</ref>
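Cascade R-CNN's stage thresholds are defined in terms of Intersection over Union between a predicted and a ground-truth box. A minimal Python sketch of the computation (the (x1, y1, x2, y2) corner convention here is illustrative, not prescribed by the paper):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes.

    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    """
    # Corners of the intersection rectangle (may be empty).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A proposal is treated as a positive example at a given cascade stage only if its IoU with a ground-truth box exceeds that stage's threshold.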
=== Selective search ===
Given an image (or an image-like feature map), '''selective search''' (also called Hierarchical Grouping) first segments the image with the graph-based method of Felzenszwalb and Huttenlocher (2004),<ref>{{Cite journal |last1=Felzenszwalb |first1=Pedro F. |last2=Huttenlocher |first2=Daniel P. |date=2004-09-01 |title=Efficient Graph-Based Image Segmentation |url=https://link.springer.com/article/10.1023/B:VISI.0000022288.19776.77 |journal=International Journal of Computer Vision |language=en |volume=59 |issue=2 |pages=167–181 |doi=10.1023/B:VISI.0000022288.19776.77 |issn=1573-1405|url-access=subscription }}</ref> then performs the following:<ref name=":1" />
 '''Input:''' (colour) image
 '''Output:''' Set of object ___location hypotheses L
 
 Segment image into initial regions R = {r₁, ..., rₙ} using Felzenszwalb and Huttenlocher (2004)
 Initialise similarity set S = ∅
 '''foreach''' neighbouring region pair (rᵢ, rⱼ) do
     Calculate similarity s(rᵢ, rⱼ)
     S = S ∪ s(rᵢ, rⱼ)
 '''while''' S ≠ ∅ do
     Get highest similarity s(rᵢ, rⱼ) = max(S)
     Merge corresponding regions rₜ = rᵢ ∪ rⱼ
     Remove similarities regarding rᵢ: S = S \ s(rᵢ, r∗)
     Remove similarities regarding rⱼ: S = S \ s(r∗, rⱼ)
     Calculate similarity set Sₜ between rₜ and its neighbours
     S = S ∪ Sₜ
     R = R ∪ rₜ
 Extract object ___location boxes L from all regions in R
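The greedy merging loop above can be sketched in Python. Here the initial regions (sets of pixel coordinates keyed by integer ids), the adjacency structure, and the similarity function are all stand-ins for the Felzenszwalb–Huttenlocher segmentation and the paper's colour/texture/size/fill similarity measures:

```python
def selective_search(regions, neighbours, similarity):
    """Greedy hierarchical grouping over an initial segmentation.

    regions: dict mapping integer region id -> set of pixel coordinates
    neighbours: set of frozenset({i, j}) pairs of adjacent region ids
    similarity: function taking two pixel sets, returning a float
    Returns every region created during merging (the ___location hypotheses).
    """
    regions = dict(regions)
    neighbours = set(neighbours)
    hypotheses = list(regions.values())
    next_id = max(regions) + 1
    # S holds the current similarity for every neighbouring pair.
    S = {pair: similarity(regions[min(pair)], regions[max(pair)])
         for pair in neighbours}
    while S:
        # Merge the most similar pair of regions into a new region r_t.
        pair, _ = max(S.items(), key=lambda kv: kv[1])
        i, j = tuple(pair)
        merged = regions[i] | regions[j]
        regions[next_id] = merged
        hypotheses.append(merged)
        # Remove all similarities involving r_i or r_j.
        S = {p: s for p, s in S.items() if i not in p and j not in p}
        # The new region inherits the neighbours of r_i and r_j.
        old_nbrs = {k for p in neighbours for k in p
                    if (i in p or j in p) and k not in (i, j)}
        neighbours = {p for p in neighbours if i not in p and j not in p}
        for k in old_nbrs:
            p = frozenset({next_id, k})
            neighbours.add(p)
            S[p] = similarity(regions[next_id], regions[k])
        del regions[i], regions[j]
        next_id += 1
    return hypotheses
```

Because every intermediate merge is kept as a hypothesis, the algorithm proposes boxes at all scales of the grouping hierarchy, not just the final segmentation.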
=== R-CNN ===
Given an input image, R-CNN begins by applying selective search to extract [[Region of interest|regions of interest]] (ROI), where each ROI is a rectangle that may represent the boundary of an object in the image. Depending on the scenario, there may be as many as {{nobr|two thousand}} ROIs. After that, each ROI is fed through a neural network to produce output features. For each ROI's output features, an ensemble of [[support-vector machine]] classifiers is used to determine what type of object (if any) is contained within the ROI.<ref name=":2">{{Cite journal |last1=Girshick |first1=Ross |last2=Donahue |first2=Jeff |last3=Darrell |first3=Trevor |last4=Malik |first4=Jitendra |date=2016-01-01 |title=Region-Based Convolutional Networks for Accurate Object Detection and Segmentation}}</ref>
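The detection loop can be sketched as follows; the proposal generator, feature extractor, and per-class SVMs are all hypothetical stand-ins (in the original system the features come from a fine-tuned convolutional network and one binary SVM is trained per class):

```python
import numpy as np

def rcnn_detect(image, propose, extract_features, svms):
    """R-CNN-style detection loop with pluggable, illustrative components.

    propose: image -> list of (x1, y1, x2, y2) region proposals
    extract_features: cropped image patch -> 1-D feature vector
    svms: dict mapping class name -> scoring function on features
    Returns (box, class, score) for every proposal with a positive score.
    """
    detections = []
    for box in propose(image):
        x1, y1, x2, y2 = box
        patch = image[y1:y2, x1:x2]          # crop the ROI
        feats = extract_features(patch)       # CNN features in the original
        # Score the region with one classifier per class;
        # keep the best-scoring class if its score is positive.
        scores = {name: svm(feats) for name, svm in svms.items()}
        name = max(scores, key=scores.get)
        if scores[name] > 0:
            detections.append((box, name, scores[name]))
    return detections
```

In practice the surviving detections are further filtered with non-maximum suppression so that overlapping boxes for the same object are collapsed to one.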
{{-}}
=== Fast R-CNN ===
[[File:RoI_pooling_animated.gif|thumb|268x268px|RoI pooling to size 2x2. In this example region proposal (an input parameter) has size 7x5.]]
At the end of the network is a '''ROIPooling''' module, which crops each ROI out of the network's output feature map and max-pools it to a fixed size, so that the subsequent layers can classify it. As in the original R-CNN, Fast R-CNN uses selective search to generate its region proposals.
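RoI pooling divides each proposal into a fixed grid of nearly equal sub-windows and max-pools each one. A simplified NumPy sketch for a single channel, with integer ROI coordinates and none of the quantisation subtleties of the original implementation (the region is assumed to be at least as large as the output grid):

```python
import numpy as np

def roi_pool(feature_map, roi, out_h, out_w):
    """Max-pool one region of interest to a fixed (out_h, out_w) size.

    feature_map: 2-D array (one channel of the CNN output)
    roi: (y1, x1, y2, x2) integer corners, end-exclusive, on the feature map
    """
    y1, x1, y2, x2 = roi
    region = feature_map[y1:y2, x1:x2]
    h, w = region.shape
    # Split the rows and columns into out_h / out_w nearly equal bins.
    y_edges = np.linspace(0, h, out_h + 1).astype(int)
    x_edges = np.linspace(0, w, out_w + 1).astype(int)
    out = np.empty((out_h, out_w), dtype=feature_map.dtype)
    for i in range(out_h):
        for j in range(out_w):
            cell = region[y_edges[i]:y_edges[i + 1],
                          x_edges[j]:x_edges[j + 1]]
            out[i, j] = cell.max()
    return out
```

For example, pooling a 7×5 proposal to 2×2, as in the animation above, splits it into four uneven sub-windows and keeps the maximum of each.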
{{-}}
=== Faster R-CNN ===
[[File:Faster-rcnn.svg|thumb|Faster R-CNN]]While Fast R-CNN used selective search to generate ROIs, Faster R-CNN integrates the ROI generation into the neural network itself.<ref name=":4" />
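The region proposal network slides over the shared feature map and, at each ___location, regresses and scores a fixed set of reference '''anchor boxes'''. A sketch of anchor generation using the paper's 3 scales × 3 aspect ratios (the stride of 16 corresponds to the VGG backbone's downsampling; the centring convention here is illustrative):

```python
import numpy as np

def make_anchors(feat_h, feat_w, stride=16,
                 scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Generate feat_h * feat_w * 9 anchors as (x1, y1, x2, y2) boxes,
    centred on each feature-map cell projected back to image coordinates."""
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    # Keep the anchor's area near s*s while varying shape.
                    w = s * np.sqrt(1.0 / r)
                    h = s * np.sqrt(r)
                    anchors.append((cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2))
    return np.array(anchors)
```

The RPN then predicts, for every anchor, an objectness score and four box-regression offsets; high-scoring anchors become the ROIs fed to the Fast R-CNN head.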
{{-}}
=== Mask R-CNN ===