Objective Revision Evaluation Service
{{MovedToMediaWiki}}
<div style="float: right;">
{{inline figure
| filename = Objective Revision Evaluation Service logo.svg
| size=250px
| id=The ORES Logo
}}
</div>
 
{{list subpages}}
The '''Objective Revision Evaluation Service''' ('''ORES''') is a web service that provides [[:en:machine learning|machine learning]] [[:en:as a service|as a service]] for Wikimedia Projects. The system is designed to help automate critical wiki-work -- for example, vandalism detection and removal. This service is developed as part of the [[Special:MyLanguage/Research:Revision scoring as a service|revision scoring as a service project]].
 
By keeping contribution ''open'' while maintaining good quality control, open knowledge projects maximize both productivity and quality. This works well for large wikis that are supported by mature quality control tools (e.g., English and German Wikipedia), but quality control remains a burden for smaller wikis. ORES is intended to provide a generalized service that supports quality control and curation work in ''all wikis''.
 
== Models ==
The primary way a user interacts with ORES is by asking it to apply a 'scorer model' to a particular revision.
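For example, a score request for a single revision and a single model might look like the following minimal sketch. The wiki (<code>enwiki</code>), revision ID, and model name are placeholders, and the endpoint layout shown here is an assumption; see https://ores.wikimedia.org for the authoritative API documentation.

<syntaxhighlight lang="python">
# Minimal sketch, assuming a v3-style scores endpoint of the form
# https://ores.wikimedia.org/v3/scores/{context}/{revid}/{model}.
# The wiki, revision ID, and model name below are placeholders.
import requests

response = requests.get(
    "https://ores.wikimedia.org/v3/scores/enwiki/123456789/damaging",
    headers={"User-Agent": "ores-example-client"},  # identify your client
    timeout=30,
)
response.raise_for_status()
print(response.json())  # nested JSON with the model's prediction and class probabilities
</syntaxhighlight>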
 
=== Support table ===
This table provides a summary overview of which models are supported in which wikis. The sections below discuss these models in more detail.
<center>
{{/Support table}}
</center>
<center>{{Clickable button|Objective Revision Evaluation Service/Get support|'''Add your wiki'''|iconPrimary=ui-icon-plusthick|class=ui-button-blue ui-button-large}}</center>
 
=== Edit quality models ===
<div style="float: right;">
{{inline figure
| filename = ORES edit quality flow.svg
| size = 351px
| id = ORES edit quality flow
| caption = A descriptive diagram of edits flowing from "The Internet" to Wikipedia depicts the "unknown" quality of edits before ORES and the "good", "needs review", "damaging" labeling that is possible after ORES is made available.
}}
</div>
One of the most critical concerns for Wikimedia's open projects is reviewing potentially damaging contributions. There is also a need to identify good-faith contributors who may be causing damage inadvertently and to offer them support. These models are intended to make the work of filtering through the recent changes feed easier. The [[#ORES edit quality flow]] image shows how the machine learning models can label the stream of edits as "good", "needs review", or "damaging"; a sketch of how these scores can be combined appears after the list below.
 
*<code>[[ORES/damaging|damaging]]</code> -- predicts whether or not an edit causes damage. The higher the "true" probability, the more likely the edit was damaging.
*<code>[[ORES/goodfaith|goodfaith]]</code> -- predicts whether an edit was saved in good-faith. The higher the "true" probability, the more likely the edit was saved in good faith.
*<code>[[ORES/reverted|reverted]]</code> -- predicts whether an edit will eventually be reverted. The higher the "true" probability, the more likely the edit will be reverted.
<br style="clear: right;" />
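The snippet below sketches how the "true" probabilities from these models might be combined to triage an edit. The response layout and the thresholds are illustrative assumptions rather than values documented on this page; wikis tune thresholds to their own needs.

<syntaxhighlight lang="python">
# Illustrative sketch: the score structure and thresholds below are assumptions,
# not values documented on this page. Real deployments tune thresholds per wiki.
example_scores = {
    "damaging": {"prediction": True, "probability": {"true": 0.87, "false": 0.13}},
    "goodfaith": {"prediction": True, "probability": {"true": 0.62, "false": 0.38}},
}

def triage(scores, damaging_threshold=0.8):
    """Label an edit as 'good', 'needs review', or 'damaging' from its scores."""
    p_damaging = scores["damaging"]["probability"]["true"]
    p_goodfaith = scores["goodfaith"]["probability"]["true"]
    if p_damaging < 0.2:
        return "good"
    if p_damaging >= damaging_threshold and p_goodfaith < 0.5:
        return "damaging"
    return "needs review"

print(triage(example_scores))  # -> "needs review"
</syntaxhighlight>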
 
=== Article quality models ===
<div style="float: right;">
{{inline figure
| filename = Article quality and importance.wp10bot.enwiki.png
| size = 350px
| id = English Wikipedia assessment table
| caption = A screenshot of the English Wikipedia assessment table generated by WP 1.0 bot is presented. This screenshot was taken on Dec 23rd, 2014.
}}
</div>
The quality of encyclopedia articles is a core concern for Wikipedians. Currently, some of the large Wikipedias roughly follow the [[:en:WP:Wikipedia 1.0|Wikipedia 1.0]] assessment rating scale when evaluating the quality of articles. Having these assessments is very useful because it helps us gauge our progress and identify missed opportunities (e.g., popular articles that are low quality). However, keeping these assessments up to date is challenging, so coverage is inconsistent. This is where the <code>wp10</code> machine learning model comes in handy. By training a model to replicate the article quality assessments that humans perform, we can automatically assess every article and every revision with a computer. This model has been used to help WikiProjects triage re-assessment work and to explore the editing dynamics that lead to article quality improvements.
*<code>[[ORES/wp10|wp10]]</code> -- predicts the Wikipedia 1.0 assessment class of an article or draft
 
<br style="clear: right;"/>
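The sketch below illustrates the shape of a <code>wp10</code> score: a probability for each Wikipedia 1.0 assessment class, with the prediction being the most probable class. The numbers are made up for illustration.

<syntaxhighlight lang="python">
# Illustrative sketch: the class probabilities below are invented; the wp10 model
# returns one probability per Wikipedia 1.0 assessment class.
example_wp10 = {
    "prediction": "C",
    "probability": {"Stub": 0.02, "Start": 0.10, "C": 0.55, "B": 0.25, "GA": 0.06, "FA": 0.02},
}

# The predicted class is simply the one with the highest probability.
best_class = max(example_wp10["probability"], key=example_wp10["probability"].get)
assert best_class == example_wp10["prediction"]
print(best_class, example_wp10["probability"][best_class])  # -> C 0.55
</syntaxhighlight>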
 
== API service ==
<div style="text-align: center;font-size: 2em;">See https://ores.wikimedia.org for information on how to use the API service.</div>
 
If you're querying the service about a large number of revisions, it's recommended to batch up to 50 revisions into each request and to use no more than four parallel requests; a sketch of this pattern follows.
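The following minimal sketch batches revisions in groups of 50 and keeps at most four requests in flight. The endpoint layout, wiki, model names, and revision IDs are placeholder assumptions; consult https://ores.wikimedia.org for the current API.

<syntaxhighlight lang="python">
# Minimal sketch, assuming a v3-style batch endpoint of the form
# https://ores.wikimedia.org/v3/scores/{context}?models=...&revids=...
# The wiki, models, and revision IDs below are placeholders.
from concurrent.futures import ThreadPoolExecutor
import requests

CONTEXT = "enwiki"
MODELS = "damaging|goodfaith"
rev_ids = list(range(123456000, 123456200))  # placeholder revision IDs

def score_batch(batch):
    """Request scores for up to 50 revisions in a single call."""
    response = requests.get(
        f"https://ores.wikimedia.org/v3/scores/{CONTEXT}",
        params={"models": MODELS, "revids": "|".join(map(str, batch))},
        headers={"User-Agent": "ores-example-client"},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()

# Batch 50 revisions per request and keep at most four requests in flight.
batches = [rev_ids[i:i + 50] for i in range(0, len(rev_ids), 50)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(score_batch, batches))
</syntaxhighlight>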
 
== Web interface ==
There is a basic web interface for ORES at https://ores.wikimedia.org/ui.
 
== Licensing ==
ORES, revscoring, and related software that we develop are freely available under the [[:en:MIT license|MIT license]]. All scores produced by the service are released into the public ___domain ([//creativecommons.org/publicdomain/zero/1.0/ CC0]).
 
== False positives ==
False positives can be reported at [[Research:Revision scoring as a service/Misclassifications/Edit quality]].
 
== See also ==
* [[Research:Revision scoring as a service]]
* [[Special:MyLanguage/Wiki labels|Wiki labels]]
* [https://github.com/wiki-ai/ores ores] repository on GitHub
* [https://github.com/wiki-ai/ores-wikimedia-config ores-wikimedia-config] repository on GitHub
 
[[Category:Revision scoring as a service]]