IBM Parallel Sysplex

 
==Sysplex==
In 1990, [[IBM]] [[mainframe computer]]s introduced the concept of a '''Systems Complex''', commonly called a '''Sysplex''', with [[MVS]]/ESA SP V4.1. This allows authorized components in up to eight [[logical partition]]s (LPARs) to communicate and cooperate with each other using the [[IBM XCF|XCF]] protocol.
 
Components of a Sysplex include:
==Parallel Sysplex==
[[File:GDPS.svg|thumb|300px|Schematic representation of a Parallel Sysplex]]
IBM introduced<ref>{{cite web
| title = S/390 Parallel Sysplex Overview
| id = 194-080
| date = April 6, 1994
| url = https://www.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/0/897/ENUS194-080/index.html
| work = Announcement Letters
| publisher = IBM
}}
</ref> the Parallel Sysplex with the addition of the 9674<ref>{{cite web
| title = IBM S/390 Coupling Facility 9674 Model C01
| id = 194-082
| date = April 6, 1994
| url = https://www.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/2/897/ENUS194-082/index.html
| work = Announcement Letters
| publisher = IBM
}}
</ref> [[Coupling Facility]] (CF), new S/390 models,<ref>{{cite web
| title = S/390 Parallel Sysplex Offering
| id = 194-081
| date = April 6, 1994
| url = https://www.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/1/897/ENUS194-081/index.html
| work = Announcement Letters
| publisher = IBM
}}
</ref><ref>{{cite web
| title = IBM ES/9000 Water-Cooled Processor Enhancements: New Ten-Way Processor, Parallel Sysplex Capability, and Additional Functions
| id = 194-084
| date = April 6, 1994
| url = https://www.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/4/897/ENUS194-084/index.html
| work = Announcement Letters
| publisher = IBM
}}
</ref><ref>{{cite web
| title = IBM Enterprise System/9000 Air-Cooled Processors Enhanced with Additional Functions and Parallel Sysplex Capability
| id = 194-085
| date = April 6, 1994
| url = https://www.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/5/897/ENUS194-085/index.html
| work = Announcement Letters
| publisher = IBM
}}
</ref> upgrades to existing models, coupling links for high speed communication, and MVS/ESA SP V5.1<ref>{{cite web
| title = IBM MVS/ESA SP Version 5 Release 1 and OpenEdition Enhancements
| id = 294-152
| date = April 6, 1994
| url = https://www.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/2/897/ENUS294-152/index.html
| work = Announcement Letters
| publisher = IBM
}}
</ref> operating system support, in April 1994.<ref>{{cite book
| title = System/390 Parallel Sysplex Performance
| id = SG24-4356-03
| date = December 1998
| edition = Fourth
| url = http://www.redbooks.ibm.com/redbooks/pdfs/sg244356.pdf
| publisher = International Business Machines Corporation
| access-date = 2007-09-17
| url-status = dead
| archive-url = https://web.archive.org/web/20110518132944/http://www.redbooks.ibm.com/redbooks/pdfs/sg244356.pdf
| archive-date = 2011-05-18
}}
</ref>
 
The Coupling Facility (CF) may reside on a dedicated stand-alone server configured with processors that can run Coupling Facility control code (CFCC), as integral processors on the mainframes themselves configured as ICFs (Internal Coupling Facilities), or, less commonly, as normal LPARs. The CF contains Lock, List, and Cache structures to help with serialization, message passing, and buffer consistency between multiple LPARs.<ref>{{cite web
| title = Coupling Facility Configuration Options
| id = ZSW01971USEN
| author = David Raften
| date = November 2019
| publisher = IBM
| work = Positioning paper
| url = http://www.ibm.com/common/ssi/fcgi-bin/ssialias?infotype=SA&subtype=WH&attachment=ZSW01971USEN.PDF&appname=STGE_ZS_ZS_USEN&htmlfid=ZSW01971USEN
}}
</ref>
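
As a rough illustration of how these three structure types divide the work, the sketch below is a simplified, hypothetical Python model (all class and resource names are invented; this is not the CFCC programming interface): a lock structure serializes updates, a list structure passes messages or work units between LPARs, and a cache structure records which LPARs hold a local copy of a data item so that stale buffers can be invalidated after a write.

<syntaxhighlight lang="python">
# Hypothetical, simplified model of the three CF structure types described
# above; purely illustrative, not IBM's CFCC interface.

class LockStructure:
    """Grants a named lock to at most one LPAR at a time (serialization)."""
    def __init__(self):
        self.holders = {}                       # resource name -> LPAR id

    def obtain(self, resource, lpar):
        if self.holders.get(resource, lpar) != lpar:
            return False                        # another LPAR holds the lock
        self.holders[resource] = lpar
        return True

    def release(self, resource, lpar):
        if self.holders.get(resource) == lpar:
            del self.holders[resource]


class ListStructure:
    """A shared queue that LPARs can use to pass messages or work units."""
    def __init__(self):
        self.entries = []

    def write(self, entry):
        self.entries.append(entry)

    def read(self):
        return self.entries.pop(0) if self.entries else None


class CacheStructure:
    """Tracks which LPARs hold a local copy of a data item so that stale
    buffers can be cross-invalidated after an update."""
    def __init__(self):
        self.interest = {}                      # item -> set of LPAR ids
        self.valid = {}                         # (item, LPAR id) -> bool

    def register(self, item, lpar):
        self.interest.setdefault(item, set()).add(lpar)
        self.valid[(item, lpar)] = True

    def update(self, item, writer):
        # The writer's copy stays valid; every other registered LPAR's
        # local buffer is marked invalid and must be re-read.
        for lpar in self.interest.get(item, set()):
            self.valid[(item, lpar)] = (lpar == writer)


# Example: LPAR "A" updates a record while LPAR "B" has it cached.
locks, cache = LockStructure(), CacheStructure()
cache.register("REC1", "A")
cache.register("REC1", "B")
assert locks.obtain("REC1", "A")                # serialize the update
cache.update("REC1", writer="A")                # B's buffer is now stale
locks.release("REC1", "A")
assert cache.valid[("REC1", "B")] is False
</syntaxhighlight>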
 
The primary goal of a Parallel Sysplex is to provide data sharing capabilities, allowing multiple database systems direct read and write access to shared data. This can provide benefits of
 
Databases running on the System z server that can take advantage of this include:
* [[IBM Db2|Db2]]
* [[IBM Information Management System]] (IMS)
* [[Virtual Storage Access Method|VSAM]] (VSAM/RLS)
Major components of a Parallel Sysplex include:
* [[Coupling Facility]] (CF or ICF) hardware, allowing multiple processors to share, cache, update, and balance data access;
* Sysplex Timers or, more recently, Server Time Protocol to synchronize the clocks of all member systems;
* High speed, high quality, redundant cabling;
* Software ([[operating system]] services and, usually, [[middleware]] such as [[IBM Db2|Db2]]).
The Coupling Facility may be either a dedicated external system (a small mainframe, such as a [[System z9]] BC, specially configured with only coupling facility processors) or integral processors on the mainframes themselves configured as ICFs (Internal Coupling Facilities).<ref>{{cite web |url=https://www.pcmag.com/encyclopedia_term/0%2C2542%2Ct%3DCoupling+Facility%26i%3D40413%2C00.asp |title=Coupling Facility Definition |publisher=PC Magazine.com |access-date=April 13, 2009 |archive-date=December 2, 2008 |archive-url=https://web.archive.org/web/20081202161800/http://www.pcmag.com/encyclopedia_term/0%2C2542%2Ct%3DCoupling+Facility%26i%3D40413%2C00.asp |url-status=dead }}</ref> It is recommended that at least one external CF be used in a Parallel Sysplex.<ref>{{cite web |url=http://www-ti.informatik.uni-tuebingen.de/os390/sysplex/sysplex/couplfac.pdf |title=Coupling Facility |access-date=April 13, 2009 |archive-date=July 17, 2011 |archive-url=https://web.archive.org/web/20110717185607/http://www-ti.informatik.uni-tuebingen.de/os390/sysplex/sysplex/couplfac.pdf |url-status=dead }}</ref> It is also recommended that a Parallel Sysplex have at least two CFs and/or ICFs for redundancy, especially in a production data sharing environment. Server Time Protocol (STP) replaced the Sysplex Timers beginning in 2005 for System z mainframe models z990 and newer.<ref>{{cite web |title=Migrate from a Sysplex Timer to STP |url=http://publib.boulder.ibm.com/infocenter/zos/v1r9/index.jsp?topic=/com.ibm.zos.r9.e0zm100/sttostp.htm |publisher=IBM |access-date=April 15, 2009 }}</ref> A Sysplex Timer is a physically separate piece of hardware from the mainframe,<ref>{{cite web |title=Sysplex Timer |url=http://www.symmetricom.com/resources/compliance-certifications/sysplex-timer/ |publisher=Symmetricom |access-date=April 15, 2009 }}</ref> whereas STP is an integral facility within the mainframe's microcode.<ref>{{cite web |title=IBM Server Time Protocol (STP) |url=http://www-03.ibm.com/systems/z/advantages/pso/stp.html |archive-url=https://web.archive.org/web/20080613095316/http://www-03.ibm.com/systems/z/advantages/pso/stp.html |url-status=dead |archive-date=June 13, 2008 |publisher=IBM |access-date=April 15, 2009 }}</ref>
With STP and ICFs it is possible to construct a complete Parallel Sysplex installation with two connected mainframes. Moreover, a single mainframe can contain the internal equivalent of a complete physical Parallel Sysplex, useful for application testing and development purposes.<ref>{{cite web |url=http://www.zjournal.com/index.cfm?section=article&aid=308 |title=MVS Boot Camp: IBM Health Checker |first=John E. |last=Johnson |publisher=z/Journal |access-date=April 15, 2009 }}{{dead link|date=January 2018 |bot=InternetArchiveBot |fix-attempted=yes }}</ref>
 
The ''IBM Systems Journal'' dedicated a full issue to all the technology components.<ref>{{cite web |url=http://researchweb.watson.ibm.com/journal/sj36-2.html |title=IBM's System Journal on S/390 Parallel Sysplex Clusters |access-date=24 April 2017 |archive-date=9 March 2012 |archive-url=https://web.archive.org/web/20120309150534/http://researchweb.watson.ibm.com/journal/sj36-2.html |url-status=dead }}</ref>
 
==Server Time Protocol==
Maintaining accurate time is important in computer systems. For example, in a transaction-processing system, the recovery process reconstructs the transaction data from log files. If time stamps are used for transaction-data logging, and the time stamps of two related transactions are transposed from the actual sequence, then the reconstructed transaction database may not match the original state.
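
A toy example (hypothetical values, for illustration only) of the problem just described: recovery replays log records in time-stamp order, so if the stamps of two dependent updates are transposed, the rebuilt record ends up with the wrong value.

<syntaxhighlight lang="python">
# Toy illustration: recovery replays a log in time-stamp order.
log = [
    {"ts": 100, "key": "balance", "value": 500},   # first update
    {"ts": 101, "key": "balance", "value": 450},   # later, dependent update
]

def recover(records):
    state = {}
    for rec in sorted(records, key=lambda r: r["ts"]):
        state[rec["key"]] = rec["value"]
    return state

print(recover(log))              # {'balance': 450} - matches the real sequence

# If the two updates were stamped by clocks that disagree, the stamps
# can be transposed, and replay produces a different final state:
log[0]["ts"], log[1]["ts"] = 101, 100
print(recover(log))              # {'balance': 500} - does not match
</syntaxhighlight>
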
Server Time Protocol (STP) can be used to provide a single time source across multiple servers. Based on Network Time Protocol concepts, one of the System z servers is designated by the HMC as the primary time source (Stratum 1). It then sends timing signals to the Stratum 2 servers through the coupling links. The Stratum 2 servers in turn send timing signals to the Stratum 3 servers. To provide availability, one of the servers can be designated as a backup time source, and a third server can be designated as an Arbiter to assist the Backup Time Server in determining whether it should take the role of the Primary during exception conditions.
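
As a rough, hypothetical illustration of the stratum hierarchy just described (invented names and deliberately simplified logic, not IBM's STP implementation), the sketch below assigns Stratum 1 to the designated primary, Stratum 2 to servers receiving timing signals directly from it over coupling links, Stratum 3 to the next tier, and lets the backup take over only when both it and the Arbiter have lost contact with the primary.

<syntaxhighlight lang="python">
# Hypothetical sketch of an STP-like timing network; not IBM's implementation.

class Server:
    def __init__(self, name):
        self.name = name
        self.stratum = None        # 1 = primary time source
        self.time_source = None    # server this one synchronizes to


def assign_strata(primary, links):
    """Breadth-first walk over coupling links: the primary is Stratum 1,
    its direct partners Stratum 2, their partners Stratum 3, and so on."""
    primary.stratum = 1
    frontier = [primary]
    while frontier:
        next_tier = []
        for srv in frontier:
            for peer in links.get(srv, []):
                if peer.stratum is None:
                    peer.stratum = srv.stratum + 1
                    peer.time_source = srv
                    next_tier.append(peer)
        frontier = next_tier


def backup_should_take_over(backup_sees_primary, arbiter_sees_primary):
    """The backup assumes the Stratum 1 role only if both it and the
    Arbiter have lost contact with the primary, so that a broken link
    alone does not create two active time sources."""
    return not backup_sees_primary and not arbiter_sees_primary


# Example: CEC1 is the designated primary; CEC2 the backup; CEC3 the Arbiter.
cec1, cec2, cec3 = Server("CEC1"), Server("CEC2"), Server("CEC3")
links = {cec1: [cec2, cec3], cec2: [cec3]}
assign_strata(cec1, links)
print(cec2.stratum, cec3.stratum)              # 2 2
print(backup_should_take_over(False, True))    # False: Arbiter still sees it
</syntaxhighlight>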
 
STP has been available on System z servers since 2005.
 
More information on STP is available in “Server Time Protocol Planning Guide”.<ref>{{cite manual
| title = Server Time Protocol Planning Guide
| id = SG24-7280-03
| date = June 2013
| edition = Fourth
| work = Redbooks
| publisher = International Business Machines Corporation
| url = http://www.redbooks.ibm.com/redbooks/pdfs/sg247280.pdf
}}
</ref>
 
==Geographically Dispersed Parallel Sysplex==
{{redirect|GDPS|other uses|GDPS (disambiguation)}}
'''Geographically Dispersed Parallel Sysplex''' ('''GDPS''') is an extension of Parallel Sysplex with mainframes located, potentially, in different cities. GDPS offers both single-site and multi-site configurations:<ref>{{cite conference |first=Riaz |last=Ahmad |date=March 5, 2009 |title=GDPS 3.6 Update & Implementation |publisher=SHARE |___location=Austin, TX |url=http://ew.share.org/proceedingmod/abstract.cfm?abstract_id=19145 |access-date=April 17, 2009 }}{{Dead link|date=January 2020 |bot=InternetArchiveBot |fix-attempted=yes }}</ref>
* GDPS HyperSwap Manager: This is based on synchronous [[Peer to Peer Remote Copy]] (PPRC) technology for use within a single data center. Data is copied from the primary storage device to a secondary storage device. In the event of a failure on the primary storage device, the system automatically makes the secondary storage device the primary, usually without disrupting running applications. A simplified sketch of this swap behavior appears after this list.
* GDPS Metro: This is based on synchronous data mirroring technology (PPRC) that can be used on mainframes {{convert|200|km|mi}} apart. In a two-system model, both sites can be administered as if they were one system. In the event of a failure of a system or storage device, recovery can occur automatically, with limited or no data loss.
* GDPS Metro Global - GM: This is a configuration for systems with more than two systems/sites, for purposes of disaster recovery. It is based on GDPS Metro together with GDPS Global - GM.
* GDPS Metro Global - XRC: This is a configuration for systems with more than two systems/sites for purposes of disaster recovery. It is based on GDPS Metro together with GDPS Global - XRC.
* GDPS Continuous Availability: This is a disaster recovery / continuous availability solution, based on two or more sites, separated by unlimited distances, running the same applications and having the same data to provide cross-site workload balancing. IBM Multi-site Workload Lifeline, through its monitoring and workload routing, plays an integral role in the GDPS Continuous Availability solution.
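
The sketch below is a purely illustrative, hypothetical Python model of the synchronous-mirroring-with-automatic-swap idea behind GDPS HyperSwap Manager (invented class and volume names, deliberately simplified error handling; this is not GDPS or PPRC code): every write must reach both copies before it completes, and when the primary fails the secondary is promoted and the write is retried so the application keeps running.

<syntaxhighlight lang="python">
# Hypothetical HyperSwap-style sketch; not IBM's GDPS or PPRC implementation.

class Volume:
    def __init__(self, name):
        self.name = name
        self.blocks = {}
        self.failed = False

    def write(self, key, data):
        if self.failed:
            raise IOError(f"{self.name} unavailable")
        self.blocks[key] = data


class MirroredDisk:
    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary

    def write(self, key, data):
        try:
            # Synchronous mirroring: the write completes only after both
            # copies have been updated.
            self.primary.write(key, data)
            self.secondary.write(key, data)
        except IOError:
            # Swap: promote the secondary and retry, transparently to the
            # application (error handling is deliberately simplified).
            self.primary, self.secondary = self.secondary, self.primary
            self.primary.write(key, data)


disk = MirroredDisk(Volume("SITE1"), Volume("SITE2"))
disk.write("rec1", "v1")
disk.primary.failed = True          # simulate a primary storage failure
disk.write("rec1", "v2")            # swap occurs; the caller is unaware
print(disk.primary.name)            # SITE2
</syntaxhighlight>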
 
==See also==