{{short description|Organization that shares designs of data center products}}
{{Infobox Organization
| name = Open Compute Project
| image = OpenCompute logo.jpg
| caption =
| formation = {{Start date and age|2011}}
| type = [[501(c)(6)]] non-profit
| purpose = Sharing designs of [[data center]] products
| website = {{URL|opencompute.org}}
}}
The '''Open Compute Project''' ('''OCP''') is an organization that facilitates the sharing of [[data center]] product designs and industry best practices among companies, including [[ARM Ltd.|ARM]], [[Meta Platforms|Meta]], [[IBM]], [[Wiwynn]], [[Intel]], [[Nokia]], [[Google]], [[Microsoft]], [[Seagate Technology]], [[Dell]], [[Rackspace]], [[Hewlett Packard Enterprise]], [[NVIDIA]], [[Cisco]], [[Goldman Sachs]], [[Fidelity Investments|Fidelity]], [[Lenovo]] and [[Alibaba Group]].<ref name=":0" /><ref name=":1" /><ref>{{cite magazine|url=https://www.wired.com/2015/03/facebook-got-even-apple-back-open-source-hardware/|title=How Facebook Changed the Basic Tech That Runs the Internet|magazine=Wired|date=11 Apr 2015|last1=Metz|first1=Cade}}</ref><ref>{{Cite web|url=http://www.opencompute.org/about/ocp-incubation-committee/|title=Incubation Committee|website=Open Compute|access-date=2016-08-19}}</ref><ref>{{Cite web|url=https://www.opencompute.org/membership/membership-organizational-directory|title=Open Compute Project}}</ref> Founded in 2011, OCP has significantly influenced the design and operation of large-scale computing facilities worldwide.<ref name=":0" />
==Structure==
[[File:Open Compute Server Front.jpg|thumb|Open Compute V2 Server]]
[[File:Open Compute 1U Drive Tray Bent.jpg|thumb|Open Compute V2 Drive Tray,<br />2nd lower tray extended]]
The Open Compute Project Foundation is a [[501(c)(6)]] non-profit incorporated in the state of Delaware, United States.
A current list of members can be found on the [http://opencompute.org/membership/membership-organizational-directory opencompute.org] website.
== History ==
The Open Compute Project began at Facebook as an internal project in 2009 called "Project Freedom". The hardware designs and engineering team were led by Amir Michael (Manager, Hardware Design).<ref>{{Cite web|date=2009-11-27|title=Facebook Follows Google to Data Center Savings|url=https://www.datacenterknowledge.com/archives/2009/11/27/facebook-follows-google-to-data-center-savings|access-date=2020-12-13|website=Data Center Knowledge|language=en}}</ref><ref>{{Cite web|title=Oxide Computer Company: On the Metal: Amir Michael|url=https://oxide.computer/podcast/on-the-metal-2-amir-michael/|access-date=2020-12-13|website=Oxide Computer Company|language=en}}</ref>
== OCP projects ==
===Server designs ===
Two years after the Open Compute Project started, it was acknowledged, with regard to a more modular server design, that "the new design is still a long way from live data centers".
Efforts to advance server compute node designs included one for [[Intel]] processors and one for [[Advanced Micro Devices|AMD]] processors. In 2013, [[Calxeda]] contributed a design with [[ARM architecture]] processors.<ref>{{Cite web |title= ARM Server Motherboard Design for Open Vault Chassis Hardware v0.3 MB-draco-hesperides-0.3 |first= Tom |last= Schnell |date= January 16, 2013 |url= http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_ARM_Server_Specification_v0.3.pdf |access-date= July 9, 2013 |archive-url= https://web.archive.org/web/20141023095543/http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_ARM_Server_Specification_v0.3.pdf |archive-date= October 23, 2014 |url-status= dead }}</ref> Since then, several generations of OCP server designs have been deployed: Wildcat (Intel), Spitfire (AMD), Windmill (Intel E5-2600), Watermark (AMD), Winterfell (Intel E5-2600 v2) and Leopard (Intel E5-2600 v3).<ref>{{Cite web |title=Guide to Facebook's Open Source Data Center Hardware|author=Data Center Knowledge|date=April 28, 2016|url=http://www.datacenterknowledge.com/archives/2016/04/28/guide-to-facebooks-open-source-data-center-hardware/|access-date=May 13, 2016}}</ref><ref>{{Cite web |title=Facebook rolls out new web and database server designs|first=The|last=Register|website=[[The Register]] |date=January 17, 2013|url=https://www.theregister.co.uk/2013/01/17/open_compute_facebook_servers/|access-date=May 13, 2016}}</ref>
=== OCP Accelerator Module ===
OCP Accelerator Module (OAM) is a design specification for hardware architectures that implement artificial intelligence systems that require high module-to-module bandwidth.<ref name="Ledin 2020 p. ">{{cite book | last=Ledin | first=Jim | title=Modern Computer Architecture and Organization | publisher=Packt Publishing Ltd | publication-place=Birmingham Mumbai | date=2020-04-30 | isbn=978-1-83898-710-7 | page=361}}</ref>
OAM is used in some of AMD's [[AMD Instinct|Instinct]] accelerator modules.
=== Rack and power designs ===
The designs for a mechanical mounting system have been published, so that open racks have the same outside width (600 mm) and depth as standard [[19-inch rack]]s, but are designed to mount wider chassis with a 537 mm width (21 inches). This allows more equipment to fit in the same volume and improves air flow. Compute chassis sizes are defined in multiples of an [[Open Rack#OpenU|OpenU]], or OU, which is 48 mm, slightly taller than the typical 44.45 mm (1.75 in) [[rack unit]]. The most recent base mechanical specification, the Open Rack V3 Base Specification, was defined and published by Meta in 2022, with significant contributions from [[Google]] and [[Rittal]].<ref>{{cite web |last1=Charest |first1=Glenn |last2=Mills |first2=Steve |last3=Vorreiter |first3=Loren |title=Open Rack V3 Base Specification |url=https://www.opencompute.org/documents/open-rack-base-specification-version-3-pdf |website=opencompute.org |publisher=Meta |access-date=25 September 2024}}</ref>
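As an illustrative comparison, rather than a figure taken from the specification itself, a chassis occupying three OpenU is <math>3 \times 48\ \text{mm} = 144\ \text{mm}</math> tall, whereas a 3U chassis in a conventional 19-inch rack is <math>3 \times 44.45\ \text{mm} \approx 133\ \text{mm}</math> tall, a difference of roughly 3.5 mm per unit of height.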
At the time the base specification was released, Meta also defined in greater depth the specifications for the [[Rectifier|rectifiers]] and power shelf.<ref>{{cite web |last1=Keyhani |first1=Hamid |last2=Tang |first2=Ted |last3=Shapiro |first3=Dmitriy |last4=Fernandes |first4=John |last5=Kim |first5=Ben |last6=Jin |first6=Tiffany |last7=Mercado |first7=Rommel |title=Open Rack V3 48V PSU Specification Rev: 1.0 |url=https://www.opencompute.org/documents/orv3-48v-psu-spec-rev-1-0-docx-1 |website=opencompute.org |publisher=Meta |access-date=25 September 2024}}</ref><ref>{{cite web |last1=Keyhani |first1=Hamid |last2=Shapiro |first2=Dmitriy |last3=Fernandes |first3=John |last4=Kim |first4=Ben |last5=Jin |first5=Tiffany |last6=Mercado |first6=Rommel |title=Open Rack V3 Power Shelf Rev 1.0 Specification |url=https://www.opencompute.org/documents/ocp-open-rack-v3-power-shelf-rev-1-0-docx-1 |website=opencompute.org |publisher=Meta |access-date=25 September 2024}}</ref> Specifications for the power monitoring interface (PMI), a communications interface enabling upstream communication between the rectifiers and the [[Backup battery|battery backup unit]] (BBU), were published by Meta that same year, with [[Delta Electronics]] as the main technical contributor to the BBU specification.<ref>{{cite web |last1=Sun |first1=David |last2=Shapiro |first2=Dmitriy |last3=Kim |first3=Ben |last4=Athavale |first4=Jayati |last5=Mercado |first5=Rommel |title=Open Rack V3 48V BBU Specification Rev: 1.4 |url=https://www.opencompute.org/documents/open-rack-v3-bbu-module-spec-1-4-pdf |website=opencompute.org |publisher=Meta |access-date=25 September 2024}}</ref>
Since 2022, however, the heavy power demands of [[AI boom|AI in the data center]] and the newer [[AI accelerator|data center processors]] released since then have necessitated higher rack power capacities. Meta is updating its Open Rack V3 rectifier, power shelf, battery backup and power management interface specifications to accommodate these more powerful AI architectures.
In May 2024, at an Open Compute regional summit, Meta and Rittal outlined their plans for the development of a High Power Rack (HPR) ecosystem in conjunction with rack, power and cable partners, increasing rack power capacity to 92 kilowatts or more to meet the [[Electric power|power needs]] of the latest generation of processors.<ref>{{cite web |last1=Open Compute Project |title=ORv3 High Power Rack (HPR) Ecosystem Solution |url=https://www.youtube.com/watch?v=X5A_uX1vzvg |website=youtube.com |publisher=Youtube |access-date=25 September 2024}}</ref> At the same meeting, Delta Electronics and [[Advanced Energy]] presented their progress in developing new Open Compute standards specifying power shelf and rectifier designs for these HPR applications.<ref>{{cite web |last1=Open Compute Project |title=Requirements/Considerations of Next Generation ORv3 PSU and Power Shelves |url=https://www.youtube.com/watch?v=7YB08H1ssJc |website=Youtube |date=4 May 2024 |access-date=25 September 2024}}</ref> Rittal also outlined its collaboration with Meta on designing airflow containment, [[busbar]] designs and [[Ground (electricity)|grounding]] schemes to the new HPR requirements.<ref>{{cite web |last1=Open Compute Project |title=ORv3 High Power Rack (HPR) Ecosystem Solution |url=https://www.youtube.com/watch?v=X5A_uX1vzvg |website=Youtube |date=4 May 2024 |access-date=25 September 2024}}</ref>
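As a rough illustration rather than a figure from the specifications, delivering 92 kW over the 48 V busbar of the Open Rack V3 power system corresponds to a bus current on the order of <math>92{,}000\ \text{W} / 48\ \text{V} \approx 1{,}900\ \text{A}</math>.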
===Data storage ===
Open Vault storage building blocks offer high disk densities, with 30 drives in a 2U [[Open Rack]] chassis designed for easy [[disk drive]] replacement. The 3.5 inch disks are stored in two drawers, five across and three deep in each drawer, with connections via [[serial attached SCSI]].<ref>{{Cite web |title= Open Vault Storage Hardware V0.7 OR-draco-bueana-0.7 |author= Mike Yan and Jon Ehlen |date= January 16, 2013 |url= http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_Open_Vault_Storage_Specification_v0.7.pdf |access-date= July 9, 2013 |archive-date= May 21, 2013 |archive-url= https://web.archive.org/web/20130521151714/http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_Open_Vault_Storage_Specification_v0.7.pdf |url-status= dead }}</ref> This storage is also called Knox, and there is a cold storage variant in which idle disks power down to reduce energy consumption.<ref>{{Cite web |title=Under the hood: Facebook's cold storage system|date=May 4, 2015|url=https://code.facebook.com/posts/1433093613662262/-under-the-hood-facebook-s-cold-storage-system-/|access-date=May 13, 2016}}</ref> Another design concept was contributed by Hyve Solutions, a division of [[Synnex]], in 2012.<ref>{{Cite web |title= Hyve Solutions Contributes Storage Design Concept to OCP Community |work= News release |date= January 17, 2013 |url= http://ir.synnex.com/releasedetail.cfm?ReleaseID=733922 |access-date= July 9, 2013 |archive-url= https://web.archive.org/web/20130414055759/http://ir.synnex.com/releasedetail.cfm?ReleaseID=733922 |archive-date= April 14, 2013 |url-status= dead }}</ref><ref>{{Cite web |title= Torpedo Design Concept Storage Server for Open Rack Hardware v0.3 ST-draco-chimera-0.3 |first= Conor |last= Malone |date= January 15, 2012 |url= http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_Storage_Server_for_Open_Rack_Specification_v0.3.pdf |access-date= July 9, 2013 |archive-url= https://web.archive.org/web/20130521143229/http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_Storage_Server_for_Open_Rack_Specification_v0.3.pdf |archive-date= May 21, 2013 |url-status= dead }}</ref> At the OCP Summit 2016, Facebook, together with Wiwynn, a spin-off of the Taiwanese ODM Wistron, introduced Lightning, a flexible NVMe JBOF (just a bunch of flash) based on the existing Open Vault (Knox) design.<ref>{{Cite web |title=Introducing Lightning: A flexible NVMe JBOF|first=Chris|last=Petersen|date=March 9, 2016|url=https://code.facebook.com/posts/989638804458007/introducing-lightning-a-flexible-nvme-jbof/|access-date= May 13, 2016}}</ref><ref>{{Cite web |title=Wiwynn Showcases All-Flash Storage Product with Leading-edge NVMe Technology|date=March 9, 2016|url=http://www.wiwynn.com/english/company/newsinfo/23|access-date= May 13, 2016}}</ref>
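The 30-drive figure follows directly from the drawer layout described above: <math>2 \times (5 \times 3) = 30</math> drives per 2U chassis.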
===Energy efficient data centers ===
===Open networking switches ===
{{See also|SONiC (operating system)}}
On May 8, 2013, an effort to define an open [[network switch]] was announced.<ref>{{Cite web|url=https://www.opencompute.org/news/up-next-for-the-open-compute-project-the-network|title=Up next for the Open Compute Project: The Network|author=Jay Hauser for Frank Frankovsky|date=May 8, 2013|work=Open Compute blog|access-date=June 16, 2019}}</ref> The plan was to allow Facebook to load its own [[operating system]] software onto the switch. Press reports predicted that more expensive, higher-performance switches would continue to be popular, while less expensive products treated more like a [[commodity]] (using the [[buzzword]] "top-of-rack") might adopt the proposal.<ref>{{Cite news |title= Can Open Compute change network switching? |first= David |last= Chernicoff |work= ZDNet |date= May 9, 2013}}</ref>
Facebook's first open networking switch, called Wedge, was designed together with the Taiwanese ODM [[Accton Technology Corporation|Accton]] using a [[Broadcom Corporation|Broadcom]] Trident II chip; the Linux-based operating system it runs is called FBOSS.<ref>{{cite web|title=Facebook Open Switching System (FBOSS) from Facebook|url=https://www.sdxcentral.com/projects/facebook-open-switching-system-fboss/reports/2017/open-source-networking/|website=[[SDxCentral]]|archive-url=https://web.archive.org/web/20181001142442/https://www.sdxcentral.com/projects/facebook-open-switching-system-fboss/reports/2017/open-source-networking/|archive-date=October 1, 2018|via=[[Internet Archive]]}}</ref><ref>{{cite web|url=https://code.facebook.com/posts/681382905244727/introducing-wedge-and-fboss-the-next-steps-toward-a-disaggregated-network/|title=Introducing "Wedge" and "FBOSS," the next steps toward a disaggregated network|website =Meet the engineers who code Facebook|date=June 18, 2014|access-date = 2016-05-13}}</ref><ref>{{cite web|url=https://code.facebook.com/posts/843620439027582/facebook-open-switching-system-fboss-and-wedge-in-the-open/|title=Facebook Open Switching System ("FBOSS") and Wedge in the open|website=Meet the engineers who code Facebook|date=March 10, 2015|access-date = 2016-05-13}}</ref> Later switch contributions include "6-pack" and Wedge-100, based on Broadcom Tomahawk chips.<ref>{{cite web|url=https://code.facebook.com/posts/203733993317833/opening-designs-for-6-pack-and-wedge-100/|title=Opening designs for 6-pack and Wedge 100|website=Meet the engineers who code Facebook|date=March 9, 2016|access-date = 2016-05-13}}</ref> Similar switch hardware designs have been contributed by [[Edge-Core Networks Corporation]] (an Accton spin-off), Mellanox Technologies, Interface Masters Technologies and Agema Systems.<ref>{{cite web|url=http://www.opencompute.org/wiki/Networking/SpecsAndDesigns|title=Accepted or shared hardware specifications|website=Open Compute|access-date = 2016-05-13}}</ref> These switches are capable of running [[Open Network Install Environment]] (ONIE)-compatible [[network operating system]]s such as [[Cumulus Linux]], Switch Light OS by Big Switch Networks, or PICOS by [[Pica8]].<ref>{{cite web|url=http://www.opencompute.org/wiki/Networking/ONIE/NOS_Status|title=Current Network Operating System (NOS) List|website=Open Compute|access-date = 2016-05-13}}</ref> A similar project for a custom switch for the [[Google platform]] had been rumored, and evolved to use the [[OpenFlow]] protocol.<ref>{{Cite news |title= Facebook Rattles Networking World With 'Open Source' Gear |date= May 8, 2013 |first= Cade|last= Metz |work= Wired |url= https://www.wired.com/wiredenterprise/2013/05/facebook_networking/ |access-date= July 9, 2013 }}</ref><ref>{{Cite news |title= Going With the Flow: Google's Secret Switch to the Next Wave of Networking |date= April 17, 2012 |first= Steven|last= Levy |work= Wired |url= https://www.wired.com/wiredenterprise/2012/04/going-with-the-flow-google/ |access-date= July 9, 2013 }}</ref>
The sub-project for [[PCI Mezzanine Card|mezzanine]] cards ([[Network interface controller|NIC]]s) released the OCP NIC 3.0 specification 1v00 in late 2019, establishing three form factors: SFF, TSFF and LFF.<ref>{{Cite web |title=Server/Mezz - OpenCompute |url=https://www.opencompute.org/wiki/Server/Mezz |access-date=2022-11-09 |website=www.opencompute.org}}</ref><ref>{{Cite web |last=Kumar |first=Rohit |date=2022-05-02 |title=OCP NIC 3.0 Form Factors The Quick Guide |url=https://www.servethehome.com/ocp-nic-3-0-form-factors-quick-guide-intel-broadcom-nvidia-meta-inspur-dell-emc-hpe-lenovo-gigabyte-supermicro/ |access-date=2022-11-09 |website=ServeTheHome |language=en-US}}</ref>
==Litigation==
== See also ==
* {{ Annotated link | List of open-source hardware projects }}
* [[Novena (computing platform)]]
* {{ Annotated link | Telecom Infra Project }}
* {{ Annotated link | OpenBMC }}
==References==
{{Reflist|30em}}
== External links ==
* Data Centers
** [https://www.facebook.com/PrinevilleDataCenter/ Prineville Data Center]
** [https://www.facebook.com/CloneeDataCenter/ Clonee Data Center (Ireland)]
* Case Studies
** [https://www.xcloudnetworks.com/case-study/ Game publisher builds a cost-efficient, scalable data center and reduces operational complexities with OCP.] {{Webarchive|url=https://web.archive.org/web/20181013220100/https://www.xcloudnetworks.com/case-study/ |date=2018-10-13 }}
{{Facebook navbox|state=collapsed}}