QuickCode

{{Infobox website
| name = QuickCode
| logo = [[File:ScraperWiki logo.svg|250px|The ScraperWiki logo, a [[wheel tractor-scraper]].]]
| screenshot =
| collapsible =
| collapsetext =
| caption =
| url = {{URL|https://quickcode.io/}}
| commercial =
| alexa = {{DecreasePositive}} 133,089 ({{as of|2014|4|1|alt=April 2014}})<ref name="alexa">{{cite web|url= http://www.alexa.com/siteinfo/scraperwiki.com |title= Scraperwiki.com Site Info | publisher= [[Alexa Internet]] |accessdate= 2014-04-01 }}</ref><!--Updated monthly by OKBot.-->
| type =
| language = English
| registration =
| owner =
| author =
| launch_date =
| current_status = Inactive
| revenue = Sponsored by 4iP<ref name=4ip-invest>{{cite web |author=Jamie Arnold |date=2009-12-01 |title=4iP invests in ScraperWiki |publisher=4iP |url=http://www.4ip.org.uk/2009/12/4ip-invests-in-scraperwiki/ }}</ref>
| content_license = [[GNU Affero General Public License]]<ref>{{cite web|url=https://github.com/sensiblecodeio/custard/blob/master/LICENCE|title=GNU Affero General Public License v3.0 - sensiblecodeio|website=GitHub|access-date=30 December 2017}}</ref>
| slogan = ScraperWiki is a platform for doing data science on the web
}}
 
'''QuickCode''' (formerly '''ScraperWiki''') was a web-based platform for collaboratively building programs to extract and analyze public (online) data, in a [[wiki]]-like fashion. "Scraper" refers to [[screen scraper]]s, programs that extract data from websites. "Wiki" means that any user with [[Computer programming|programming]] experience could create or edit such programs for extracting new data, or for analyzing existing datasets.<ref name=4ip-invest /> The main use of the website was providing a place for [[programmer]]s and [[journalist]]s to collaborate on analyzing public data.<ref>{{cite news |author=Cian Ginty |date=2010-11-19 |title=Hacks and hackers unite to get solid stories from difficult data |publisher=The Irish Times |url=http://www.irishtimes.com/newspaper/finance/2010/1119/1224283709384.html }}</ref><ref>{{cite web |author=Paul Bradshaw |date=2010-07-07 |title=An introduction to data scraping with Scraperwiki |publisher=Online Journalism Blog |url=http://onlinejournalismblog.com/2010/07/07/an-introduction-to-data-scraping-with-scraperwiki/ }}</ref><ref>{{cite news |author=Charles Arthur |date=2010-11-22 |title=Analysing data is the future for journalists, says Tim Berners-Lee |work=The Guardian |url=https://www.theguardian.com/media/2010/nov/22/data-analysis-tim-berners-lee }}</ref><ref>{{cite news |author=Deirdre McArdle |date=2010-11-19 |title=In The Papers 19 November |publisher=ENN |url=http://www.enn.ie/story/show/10125973 }}</ref><ref>{{cite web |date=2010-11-15 |title=Journalists and developers join forces for Lichfield 'hack day' |publisher=The Lichfield Blog |url=http://thelichfieldblog.co.uk/2010/11/15/journalists-and-developers-join-forces-for-lichfield-hack-day/ |access-date=2010-12-09 |archive-date=2010-11-24 |archive-url=https://web.archive.org/web/20101124082507/http://thelichfieldblog.co.uk/2010/11/15/journalists-and-developers-join-forces-for-lichfield-hack-day/ |url-status=dead }}</ref><ref>{{cite news |author=Alison Spillane |date=2010-11-17 |title=Online tool helps to create greater public data transparency |publisher=Politico |url=http://politico.ie/index.php?option=com_content&view=article&id=6906:online-tool-helps-to-create-greater-public-data-transparency&catid=193:science-tech&Itemid=880 }}</ref>
 
The service was renamed circa 2016, as "it isn't a wiki or just for scraping any more".<ref name="ScraperWiki">{{cite web|url=https://scraperwiki.com/|title=ScraperWiki|access-date=7 February 2017}}</ref> At the same time, the eponymous parent company was renamed 'The Sensible Code Company'.<ref name="ScraperWiki" />
 
==Scrapers==
Scrapers were created using a browser-based IDE, or by connecting via SSH to a server running [[GNU/Linux]]. They could be programmed in a variety of programming languages, including [[Perl]], [[Python (programming language)|Python]], [[Ruby (programming language)|Ruby]], [[JavaScript]] and [[R (programming language)|R]].
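A typical scraper on such a platform fetched a page, extracted structured rows from its HTML, and saved them to a per-scraper datastore. The following is a minimal, self-contained Python sketch of that scrape-and-store pattern using only the standard library; the sample HTML, table name, and column names are invented for illustration, and the platform's own helper library is deliberately not used here.

```python
# Minimal sketch of the scrape-and-store pattern: parse HTML,
# extract table rows, and persist them to a SQLite datastore.
import sqlite3
from html.parser import HTMLParser

# Stand-in for a fetched web page (hypothetical data).
SAMPLE_HTML = """
<table>
  <tr><td>Liverpool</td><td>2009</td></tr>
  <tr><td>London</td><td>2010</td></tr>
</table>
"""

class CellExtractor(HTMLParser):
    """Collects the text of every <td> cell, grouped by <tr> row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False
        elif tag == "tr" and self._row:
            self.rows.append(tuple(self._row))

    def handle_data(self, data):
        if self._in_td:
            self._row.append(data.strip())

def scrape(html):
    """Parse the HTML and save the extracted rows to an in-memory database."""
    parser = CellExtractor()
    parser.feed(html)
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE data (city TEXT, year INTEGER)")
    db.executemany("INSERT INTO data VALUES (?, ?)", parser.rows)
    return db

db = scrape(SAMPLE_HTML)
print(db.execute("SELECT city, year FROM data ORDER BY year").fetchall())
```

On the real service, the resulting datastore was what collaborators queried and analyzed; here the SQLite table plays that role.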
 
==History==
ScraperWiki was founded in 2009 by [[Julian Todd]] and Aidan McGuire. It was initially funded by 4iP, the venture capital arm of TV channel [[Channel 4 Television Corporation|Channel 4]], and later attracted a further £1 million round of funding from Enterprise Ventures.
 
[[Aidan McGuire]] is the [[chief executive officer]] of The Sensible Code Company. [[Francis Irving]] was previously chief executive officer of ScraperWiki.<ref>[http://blog.scraperwiki.com/2012/03/09/from-cms-to-dms-c-is-for-content-d-is-for-data/ From CMS to DMS: C is for Content, D is for Data] from the ScraperWiki Data Blog</ref>
 
==See also==
 
==External links==
* {{Official website|https://quickcode.io/}}
* [https://github.com/sensiblecodeio/custard Custard repository on GitHub]
 
[[Category:Collaborative projects]]
[[Category:Wiki software]]
[[Category:Social information processing]]
[[Category:Web analytics]]
[[Category:Mashup (web application hybrid)]]
[[Category:Web scraping]]
[[Category:Software using the GNU Affero General Public License]]
 
 
{{wiki-stub}}