Distributed file system for cloud
 
== Overview ==
 
=== History ===
Today, there are many implementations of distributed file systems. The first file servers were developed by researchers in the 1970s, and Sun Microsystems' [[Network File System]] became available in the 1980s. Before that, people who wanted to share files used the [[sneakernet]] method, physically transporting files on storage media from place to place. Once computer networks started to proliferate, it became obvious that existing file systems had many limitations and were unsuitable for multi-user environments. Users initially used [[FTP]] to share files.<ref>{{harvnb|Sun microsystem|p=1}}</ref> FTP first ran on the [[PDP-10]] at the end of 1973. Even with FTP, files needed to be copied from the source computer onto a server and then from the server onto the destination computer, and users were required to know the physical addresses of all computers involved in the file sharing.<ref>{{harvnb|Fabio Kon|1996|p=1}}</ref>
 
=== Supporting techniques ===
 
=== Client-server architecture ===
[[Network File System]] (NFS) uses a [[client-server architecture]], which allows sharing of files between a number of machines on a network as if they were located locally, providing a standardized view. The NFS protocol allows processes on heterogeneous clients, possibly running on different machines and under different operating systems, to access files on a distant server without knowing the actual ___location of the files. However, relying on a single server leaves the NFS protocol with potentially low availability and poor scalability, and using multiple servers does not solve the availability problem, since each server works independently.<ref>{{harvnb|Di Sano| Di Stefano|Morana|Zito|2012|p=2}}</ref> The model of NFS is a remote file service. This model, also called the remote access model, contrasts with the upload/download model:
* Remote access model: provides transparency; the client has access to a file and sends requests to the remote file while the file remains on the server.<ref>{{harvnb|Andrew|Maarten|2006|p=492}}</ref>
* Upload/download model: the client can access the file only locally; it has to download the file, make modifications, and upload it again so that it can be used by other clients.
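As an illustration, the two models can be contrasted in a short sketch. The `Server` class and the function names are hypothetical, not part of NFS or any real protocol:

```python
# Illustrative sketch (hypothetical names): remote access model vs.
# upload/download model for a shared file.

class Server:
    """Holds the authoritative copy of each file."""
    def __init__(self):
        self.files = {"report.txt": "v1"}

    # Remote access model: each operation is a request; the file stays here.
    def read(self, name):
        return self.files[name]

    def write(self, name, data):
        self.files[name] = data

    # Upload/download model: the whole file is transferred.
    def download(self, name):
        return self.files[name]

    def upload(self, name, data):
        self.files[name] = data


def remote_access_edit(server, name, data):
    # The client sends operations to the server; no local copy is kept.
    server.write(name, data)
    return server.read(name)


def upload_download_edit(server, name, data):
    # The client fetches the file, modifies it locally, then uploads it
    # again so that other clients can see the change.
    local_copy = server.download(name)
    local_copy = data  # local modification
    server.upload(name, local_copy)
    return local_copy
```

In the first function the server always holds the current version; in the second, the file is stale on the server between the download and the upload.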
 
 
==== Design principles ====
 
===== Goals =====
[[Google File System]] (GFS) and [[Hadoop Distributed File System]] (HDFS) are specifically built for handling [[batch processing]] on very large data sets.
 
===== Load balancing =====
[[Load balancing (computing)|Load balancing]] is essential for efficient operation in distributed environments. It means distributing work fairly among different servers<ref>{{harvnb|Kai|Dayang|Hui|Yintang|2013|p=23}}</ref> in order to get more work done in the same amount of time and to serve clients faster. Consider a system of N chunkservers in a cloud (N being 1,000, 10,000, or more) storing a certain number of files: each file is split into several parts or chunks of fixed size (for example, 64 megabytes), and the load of each chunkserver is proportional to the number of chunks hosted by the server.<ref name="ReferenceA">{{harvnb|Hsiao|Chung|Shen|Chao|2013|p=2}}</ref> In a load-balanced cloud, resources can be used efficiently while maximizing the performance of MapReduce-based applications.
 
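The chunk-based load measure described above can be sketched as follows. `CHUNK_SIZE`, the round-robin placement, and all names are illustrative assumptions, not the actual placement policy of GFS or HDFS:

```python
# Sketch of the load measure: files split into fixed-size chunks, a
# server's load being the number of chunks it hosts. Round-robin
# placement is an assumption for illustration only.

import math

CHUNK_SIZE = 64 * 2**20  # fixed chunk size: 64 megabytes

def chunk_count(file_size):
    """Number of fixed-size chunks needed to store a file of file_size bytes."""
    return max(1, math.ceil(file_size / CHUNK_SIZE))

def place_chunks(file_sizes, n_servers):
    """Place chunks round-robin; returns the chunk count per server."""
    load = [0] * n_servers
    server = 0
    for size in file_sizes:
        for _ in range(chunk_count(size)):
            load[server] += 1
            server = (server + 1) % n_servers
    return load
```

For example, two files of three chunks and one chunk placed on two servers yield a balanced load of two chunks each.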
 
===== Load rebalancing =====
 
In a cloud computing environment, failure is the norm,<ref>{{harvnb|Hsiao|Chung|Shen|Chao|2013|p=952}}</ref><ref>{{harvnb|Ghemawat|Gobioff|Leung|2003|p=1}}</ref> and chunkservers may be upgraded, replaced, and added to the system. Files can also be dynamically created, deleted, and appended. That leads to load imbalance in a distributed file system, meaning that the file chunks are not distributed equitably between the servers.
 
Distributed file systems in clouds such as GFS and HDFS rely on central or master servers or nodes (Master for GFS and NameNode for HDFS) to manage the metadata and the load balancing. The master rebalances replicas periodically: data must be moved from one DataNode/chunkserver to another if free space on the first server falls below a certain threshold.<ref>{{harvnb|Ghemawat|Gobioff|Leung|2003|p=8}}</ref> However, this centralized approach can become a bottleneck: if the master servers cannot manage a large number of file accesses, rebalancing adds to their already heavy loads. The load-rebalancing problem is [[w:NP-hard|NP-hard]].<ref>{{harvnb|Hsiao|Chung|Shen|Chao|2013|p=953}}</ref>
 
In order to get a large number of chunkservers to work in collaboration, and to solve the problem of load balancing in distributed file systems, several approaches have been proposed, such as reallocating file chunks so that the chunks can be distributed as uniformly as possible while reducing the movement cost as much as possible.<ref name="ReferenceA" />
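A toy version of such chunk reallocation, moving one chunk at a time from the most- to the least-loaded server while counting the movement cost, might look like this. It is a simple greedy heuristic for illustration, not the algorithm of any cited paper:

```python
# Toy rebalancing heuristic (illustrative only): repeatedly move one chunk
# from the most-loaded server to the least-loaded one until all servers
# are within one chunk of each other, counting moves as the movement cost.

def rebalance(load):
    """load: list of chunk counts per server; returns (new_load, moves)."""
    new_load = list(load)
    moves = 0
    while max(new_load) - min(new_load) > 1:
        src = new_load.index(max(new_load))  # most-loaded server
        dst = new_load.index(min(new_load))  # least-loaded server
        new_load[src] -= 1                   # move one chunk src -> dst
        new_load[dst] += 1
        moves += 1
    return new_load, moves
```

Real systems must additionally weigh each move's network cost and keep replicas of the same chunk on distinct servers, which is what makes the problem hard.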
 
==== Google file system ====
{{Main|Google File System}}
 
===== Description =====
 
The master server, running in a dedicated node, is responsible for coordinating storage resources and managing the files' [[metadata]] (the equivalent of, for example, inodes in classical file systems).<ref name="Krzyzanowski_p2">{{harvnb|Krzyzanowski|2012|p=2}}</ref>
Each file is split into multiple chunks of 64 megabytes. Each chunk is stored in a chunk server. A chunk is identified by a chunk handle, a globally unique 64-bit number assigned by the master when the chunk is first created.
 
The master maintains all of the files' metadata, including file names, directories, and the mapping of each file to the list of chunks that contain its data. The metadata is kept in the master server's main memory, along with the mapping of files to chunks. Updates to this data are logged to an operation log on disk, and this log is replicated onto remote machines. When the log becomes too large, a checkpoint is made and the main-memory data is stored in a [[B-tree]] structure to facilitate mapping back into main memory.<ref>{{harvnb|Krzyzanowski|2012|p=4}}</ref>
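The master's metadata structures described above can be sketched as follows. The `Master` class and its fields are hypothetical simplifications; real GFS additionally checkpoints and replicates the log and tracks chunk locations:

```python
# Hypothetical sketch of the master's in-memory metadata: each file maps
# to an ordered list of globally unique chunk handles, and every mutation
# is appended to an operation log.

import itertools

class Master:
    def __init__(self):
        self._handles = itertools.count(1)  # source of unique 64-bit handles
        self.file_chunks = {}               # file name -> list of chunk handles
        self.op_log = []                    # in-memory stand-in for the on-disk log

    def create_file(self, name):
        self.file_chunks[name] = []
        self.op_log.append(("create", name))

    def add_chunk(self, name):
        """Assign a new, globally unique handle when a chunk is first created."""
        handle = next(self._handles)
        self.file_chunks[name].append(handle)
        self.op_log.append(("add_chunk", name, handle))
        return handle
```

Replaying `op_log` from the last checkpoint is what lets a restarted master rebuild this in-memory state.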
 
===== Fault tolerance =====
 
==== Hadoop distributed file system ====
{{Main|Apache Hadoop}}
 
{{abbr|HDFS|Hadoop Distributed File System}}, developed by the [[Apache Software Foundation]], is a distributed file system designed to hold very large amounts of data (terabytes or even petabytes). Its architecture is similar to GFS, i.e. a server/client architecture. HDFS is normally installed on a cluster of computers.
The design concept of Hadoop is informed by Google's, with Google File System, Google MapReduce and [[Bigtable]] being implemented by Hadoop Distributed File System (HDFS), Hadoop MapReduce, and Hadoop Base (HBase) respectively.<ref>{{harvnb|Fan-Hsun|Chi-Yuan| Li-Der| Han-Chieh|2012|p=2}}</ref> Like GFS, HDFS is suited for scenarios with write-once-read-many file access, and supports file appends and truncates in lieu of random reads and writes, to simplify data-coherency issues.<ref>{{Cite web | url=http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html#Assumptions_and_Goals | title=Apache Hadoop 2.9.2 – HDFS Architecture}}</ref>
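The write-once-read-many model, with appends but no random writes, can be sketched as follows (an illustrative class, not the HDFS client API):

```python
# Illustrative write-once-read-many file: reads and appends are allowed,
# in-place (random) writes are not. Class and methods are hypothetical,
# not the HDFS API.

class WormFile:
    def __init__(self, data=b""):
        self._data = data

    def read(self):
        return self._data

    def append(self, more):
        self._data += more  # appends are supported

    def write_at(self, offset, data):
        # random writes are rejected, which simplifies data coherency:
        # existing bytes never change, so cached reads never go stale
        raise NotImplementedError("random writes are not supported")
```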
 
Distributed file systems can be optimized for different purposes. Some, such as those designed for internet services, including GFS, are optimized for scalability. Other designs for distributed file systems support performance-intensive applications usually executed in parallel.<ref>{{harvnb|Soares| Dantas†|de Macedo|Bauer|2013|p=158}}</ref> Some examples include: [[MapR FS|MapR File System]] (MapR-FS), [[Ceph (storage)|Ceph-FS]], [[BeeGFS|Fraunhofer File System (BeeGFS)]], [[Lustre (file system)|Lustre File System]], [[IBM General Parallel File System]] (GPFS), and [[Parallel Virtual File System]].
 
MapR-FS is a distributed file system that is the basis of the MapR Converged Platform, with capabilities for distributed file storage, a NoSQL database with multiple APIs, and an integrated message streaming system. MapR-FS is optimized for scalability, performance, reliability, and availability. Its file storage capability is compatible with the Apache Hadoop Distributed File System (HDFS) API but with several design characteristics that distinguish it from HDFS. Among the most notable differences are that MapR-FS is a fully read/write filesystem with metadata for files and directories distributed across the namespace, so there is no NameNode.<ref name="mapr-productivity">{{cite web|last1=Perez|first1=Nicolas|title=How MapR improves our productivity and simplifies our design|url=https://medium.com/@anicolaspp/how-mapr-improves-our-productivity-and-simplify-our-design-2d777ab53120#.mvr6mmydr|website=Medium|publisher=Medium|access-date=June 21, 2016|date=2016-01-02}}</ref><ref>{{cite web|last1=Woodie|first1=Alex|title=From Hadoop to Zeta: Inside MapR's Convergence Conversion|url=http://www.datanami.com/2016/03/08/from-hadoop-to-zeta-inside-maprs-convergence-conversion/|website=Datanami|publisher=Tabor Communications Inc.|access-date=June 21, 2016|date=2016-03-08}}</ref><ref>{{cite web|last1=Brennan|first1=Bob|title=Flash Memory Summit|url=https://www.youtube.com/watch?v=fOT63zR7PvU&t=1682|website=youtube|publisher=Samsung|access-date=June 21, 2016}}</ref><ref name="maprfs-video">{{cite web|last1=Srivas|first1=MC|title=MapR File System|url=https://www.youtube.com/watch?v=fP4HnvZmpZI|website=Hadoop Summit 2011|date=23 July 2011 |publisher=Hortonworks|access-date=June 21, 2016}}</ref><ref name="real-world-hadoop">{{cite book|last1=Dunning|first1=Ted|last2=Friedman|first2=Ellen|title=Real World Hadoop|date=January 2015|publisher=O'Reilly Media, Inc|___location=Sebastopol, 
CA|isbn=978-1-4919-2395-5|pages=23–28|edition=First|chapter-url=http://shop.oreilly.com/product/0636920038450.do|access-date=June 21, 2016|language=en|chapter=Chapter 3: Understanding the MapR Distribution for Apache Hadoop}}</ref>
 
Ceph-FS is a distributed file system that provides excellent performance and reliability.<ref>{{harvnb|Weil|Brandt|Miller|Long|2006|p=307}}</ref> It answers the challenges of dealing with huge files and directories, coordinating the activity of thousands of disks, providing parallel access to metadata on a massive scale, manipulating both scientific and general-purpose workloads, authenticating and encrypting on a large scale, and increasing or decreasing dynamically due to frequent device decommissioning, device failures, and cluster expansions.<ref>{{harvnb|Maltzahn|Molina-Estolano|Khurana|Nelson|2010|p=39}}</ref>
== Bibliography ==
* {{cite book
| last1 = Andrew
| first1 = S.Tanenbaum
| last2 = Maarten
| first2 = Van Steen
| year = 2006
| title = Distributed systems principles and paradigms
| url = http://net.pku.edu.cn/~course/cs501/2011/resource/2006-Book-distributed%20systems%20principles%20and%20paradigms%202nd%20edition.pdf
| access-date = 2014-01-10
| archive-date = 2013-08-20
| archive-url = https://web.archive.org/web/20130820190519/http://net.pku.edu.cn/~course/cs501/2011/resource/2006-Book-distributed%20systems%20principles%20and%20paradigms%202nd%20edition.pdf
| url-status = dead
}}
* {{cite journal
| first = Fabio |last=Kon
| year = 1996
| title = Distributed File Systems, The State of the Art and concept of Ph.D. Thesis
| citeseerx = 10.1.1.42.4609
}}
* {{cite web
| first = Fabio |last = Kon
| title = Distributed File Systems Past, Present and Future: A Distributed File System for 2006
| url = https://www.researchgate.net/publication/2439179
| website = [[ResearchGate]]
}}
* {{cite web
* {{cite web
| last1 = Jacobi
| first1 = Tim-Daniel
| last2 = Lingemann
| first2 = Jan
| url = http://wr.informatik.uni-hamburg.de/_media/research/labs/2012/2012-10-tim-daniel_jacobi_jan_lingemann-evaluation_of_distributed_file_systems-report.pdf
| title = Evaluation of Distributed File Systems
| access-date = 2014-01-24
| archive-date = 2014-02-03
| archive-url = https://web.archive.org/web/20140203140412/http://wr.informatik.uni-hamburg.de/_media/research/labs/2012/2012-10-tim-daniel_jacobi_jan_lingemann-evaluation_of_distributed_file_systems-report.pdf
| url-status = dead
}}
# Architecture, structure, and design:
#* {{cite book
| year = 2012
| doi = 10.1109/ClusterW.2012.27
| others = Coll. of Comput. Sci. & Technol., Zhejiang Univ., Hangzhou, China
| s2cid = 12430485
| chapter = A Novel Scalable Architecture of Cloud Storage System for Small Files Based on P2P
| year = 2013
| doi = 10.1109/CTS.2013.6567222
| others = Information and Computer Science Department King Fahd University of Petroleum and Minerals
| s2cid = 45293053
| pages = 155–161
| year = 2012
| url = http://www.cs.rutgers.edu/~pxk/417/notes/16-dfs.pdf
| access-date = 2013-12-27
| archive-date = 2013-12-27
| archive-url = https://web.archive.org/web/20131227152320/http://www.cs.rutgers.edu/~pxk/417/notes/16-dfs.pdf
| url-status = dead
}}
#* {{cite conference
| last1 = Kobayashi | first1 = K
| title = The Gfarm File System on Compute Clouds
| conference = Parallel and Distributed Processing Workshops and Phd Forum (IPDPSW), 2011 IEEE International Symposium on
| conference-url = https://ieeexplore.ieee.org/xpl/conhome/6008655/proceeding
| doi = 10.1109/IPDPS.2011.255
| others = Grad. Sch. of Syst. & Inf. Eng., Univ. of Tsukuba, Tsukuba, Japan
}}
#* {{cite book
| year = 2012
| doi = 10.1109/ICAICT.2012.6398489
| others = Department of Computer Engineering Qafqaz University Baku, Azerbaijan
| s2cid = 6113112
| pages = 1–5
| first4 =Yu-Chang
| title = Load Rebalancing for Distributed File Systems in Clouds
| journal = IEEE Transactions on Parallel and Distributed Systems
| year = 2013
| doi = 10.1109/TPDS.2012.196
| others = National Cheng Kung University, Tainan
| s2cid = 11271386
| pages = 951–962
| year = 2013
| doi = 10.1109/INCoS.2013.14
| others = State Key Lab. of Integrated Service Networks, Xidian Univ., Xi'an, China
| s2cid = 14821266
| pages = 23–29
| year = 2008
| doi = 10.1109/NCM.2008.164
| others = Sch. of Bus. IT, Kookmin Univ., Seoul
| s2cid = 18933772
| pages = 400–405
| year = 2013
| doi = 10.1109/WETICE.2013.12
| others = nf. & Statistic Dept. (INE), Fed. Univ. of Santa Catarina (UFSC), Florianopolis, Brazil
| s2cid = 6155753
| pages = 158–163
| year = 2012
| doi = 10.1109/ICAICT.2012.6398484
| s2cid = 16674289
| others = Comput. Eng. Dept., Qafqaz Univ., Baku, Azerbaijan
| pages = 1–3
| chapter = Distributed file system as a basis of data-intensive computing
| year = 2003
| url = https://www.kernel.org/doc/ols/2003/ols2003-pages-380-386.pdf
| pages = 400–407
| others = Cluster File Systems, Inc.
}}
#* {{cite journal
| last1 = Jones
|first1 = Terry
| last2 = Koniges
|first2 = Alice
|last3 = Yates
|first3 = R. Kim
| title = Performance of the IBM General Parallel File System
| periodical = Parallel and Distributed Processing Symposium, 2000. IPDPS 2000. Proceedings. 14th International
| url = https://computing.llnl.gov/code/sio/GPFS_performance.pdf
|year = 2000
|access-date = 2014-01-24
| others = Lawrence Livermore National Laboratory
|archive-date = 2013-02-26
|archive-url = https://web.archive.org/web/20130226053255/https://computing.llnl.gov/code/sio/GPFS_performance.pdf
|url-status = dead
}}
#* {{cite conference
| last1 = Weil
| first1 = Sage A.
| last2 = Brandt
| first2 = Scott A.
| last3 = Miller
| first3 = Ethan L.
| last4 = Long
| first4 = Darrell D. E.
| title = Ceph: A Scalable, High-Performance Distributed File System
| year = 2006
| url = http://www.ssrc.ucsc.edu/Papers/weil-osdi06.pdf
|conference = Proceedings of the 7th Conference on Operating Systems Design and Implementation (OSDI '06)
| others = University of California, Santa Cruz
|access-date = 2014-01-24
|archive-date = 2012-03-09
|archive-url = https://web.archive.org/web/20120309021423/http://www.ssrc.ucsc.edu/Papers/weil-osdi06.pdf
|url-status = dead
}}
#* {{cite report
| last1 = Maltzahn
| first1 = Carlos
| first4= Alex J.
| last5 = Brandt
| first5= Scott A.
| last6=Weil
| first6=Sage
| title =Ceph as a scalable alternative to the Hadoop Distributed FileSystem
| year = 2010
| year = 2003
| doi = 10.1109/MASS.2003.1194865
| pages = 290–298
| others = Storage Syst. Res. Center, California Univ., Santa Cruz, CA, USA
| chapter = Efficient metadata management in large distributed storage systems
| isbn = 978-0-7695-1914-2
| year = 2011
| doi = 10.1109/SWS.2011.6101263
| s2cid = 14791637
| others = PCN&CAD Center, Beijing Univ. of Posts & Telecommun., Beijing, China
| pages = 16–20
| chapter = A carrier-grade service-oriented file storage architecture for cloud computing
Line 512 ⟶ 518:
| isbn = 978-1-58113-757-6
| s2cid =221261373
| chapter-url = https://www.semanticscholar.org/paper/7b56847e641168aed58f3603bc00af84d414c9aa
}}
# Security
| year = 2009
| doi = 10.1109/I-SPAN.2009.150
| pages = 4–16
| others = Dept. of Comput. Sci. & Software Eng., Univ. of Melbourne, Melbourne, VIC, Australia
| chapter = High-Performance Cloud Computing: A View of Scientific Applications
| isbn = 978-1-4244-5403-7
| year = 2012
| doi = 10.1109/MIC.2012.6273264
| s2cid = 40685246
| others = Comput. Coll., Northwestern Polytech. Univ., Xi'An, China
| pages = 327–331
| chapter = PsFS: A high-throughput parallel file system for secure Cloud Storage system
| last4 = Xue
| first4 = Lan
| title = Efficient Metadata Management in Large Distributed Storage Systems
| periodical = 11th NASA Goddard Conference on Mass Storage Systems and Technologies, San Diego, CA
| year = 2003
| url = http://www.ssrc.ucsc.edu/Papers/brandt-mss03.pdf
| access-date = 2013-12-27
| others = Storage Systems Research Center University of California, Santa Cruz
| archive-date = 2013-08-22
| archive-url = https://web.archive.org/web/20130822213717/http://www.ssrc.ucsc.edu/Papers/brandt-mss03.pdf
| url-status = dead
}}
#* {{cite journal
| author = Lori M. Kaufman
| s2cid = 16233643
| title =Data Security in the World of Cloud Computing
| journal = IEEE Security & Privacy
| year = 2009
| doi = 10.1109/MSP.2009.87
| last3 = Oprea
| first3 =Alina
| title = Proceedings of the 16th ACM conference on Computer and communications security
| chapter = HAIL: A high-availability and integrity layer for cloud storage
| s2cid = 207176701
| year = 2009
| doi = 10.1145/1653662.1653686
| year = 2012
| doi = 10.1109/Grid.2012.17
| s2cid = 10778240
| others = Dept. of Comput. Sci., Hefei Univ. of Technol., Hefei, China
| pages = 12–21
| chapter = A Distributed Cache for Hadoop Distributed File System in Real-Time Cloud Services
| year = 2012
| doi = 10.1109/SC.Companion.2012.103
| s2cid = 5554936
| others = Dept. of Electr. & Comput. Eng., Purdue Univ., West Lafayette, IN, USA
| pages = 753–759
| chapter = Integrating High Performance File Systems in a Cloud Computing Environment
| year = 2012
| doi = 10.1109/ISPACS.2012.6473485
| s2cid = 18260943
| others = Dept. of Comput. Sci. & Inf. Eng., Nat. Central Univ., Taoyuan, Taiwan
| pages = 227–232
| chapter = Implement a reliable and secure cloud distributed file system
| year = 2012
| doi = 10.1109/WETICE.2012.104
| s2cid = 19798809
| others = Dept. of Electr., Electron. & Comput. Eng., Univ. of Catania, Catania, Italy
| pages = 173–178
| chapter = File System As-a-Service: Providing Transient and Consistent Views of Files to Cooperating Applications in Clouds
| year = 2008
| url = http://www.pewinternet.org/~/media//Files/Reports/2008/PIP_Cloud.Memo.pdf.pdf
| access-date = 2013-12-27
| archive-date = 2013-07-12
| archive-url = https://web.archive.org/web/20130712182757/http://www.pewinternet.org/~/media//Files/Reports/2008/PIP_Cloud.Memo.pdf.pdf
| url-status = dead
}}
#* {{cite journal
| last1 = Yau
| last4 = Gibson
| first4 = Garth
| title = Proceedings of the 4th Annual Workshop on Petascale Data Storage
| chapter = DiskReduce: RAID for data-intensive scalable computing
| s2cid = 15194567
| year = 2009
| doi = 10.1145/1713072.1713075
| pages = 6–10
| isbn = 978-1-60558-883-4
}}
#* {{cite book
| last3 = Weatherspoon
| first3 = Hakim
| title = Proceedings of the 1st ACM symposium on Cloud computing
| chapter = RACS: A case for cloud storage diversity
| s2cid = 1283873
| year = 2010
| doi = 10.1145/1807128.1807165
| isbn = 978-1-4577-1904-2
}}
#* {{cite book
| last1 = Qian
| first1 = Haiyang
| last3 = T.
| first3 = Trivedi
| title = 12th IFIP/IEEE International Symposium on Integrated Network Management (IM 2011) and Workshops
| chapter = A hierarchical model to evaluate quality of experience of online services hosted by cloud computing
| year = 2011
| doi = 10.1109/INM.2011.5990680
| pages = 105–112
| isbn = 978-1-4244-9219-0
| citeseerx = 10.1.1.190.5148
| s2cid = 15912111
}}
| chapter = Provable data possession at untrusted stores
| isbn = 978-1-59593-703-2
| url = https://figshare.com/articles/journal_contribution/6469184
}}
#* {{cite book
| last2 = S. Kaliski
| first2 = Burton
| title = Proceedings of the 14th ACM conference on Computer and communications security
| chapter = Pors: Proofs of retrievability for large files
| s2cid = 6032317
| year = 2007
| doi = 10.1145/1315245.1315317
| journal=Proceedings of the VLDB Endowment | volume = 2 |issue= 1|doi=10.14778/1687627.1687657
}}
#* {{cite report
| last1 = Daniel
| first1 = J. Abadi
| chapter = Provable data possession at untrusted stores
| isbn = 978-1-59593-703-2
| url = https://figshare.com/articles/journal_contribution/6469184
}}
# Synchronization
| doi = 10.1109/CLUSTERWKSP.2010.5613087
| pages = 1–4
| others =Inst. of Comput. Sci. (ICS), Found. for Res. & Technol. - Hellas (FORTH), Heraklion, Greece
| s2cid = 14577793
| chapter = Cloud-based synchronization of distributed file system hierarchies
| s2cid = 16233643
| title = Data Security in the World of Cloud Computing
| journal = IEEE Security & Privacy
| year = 2009
| doi = 10.1109/MSP.2009.87
| year = 2011
| doi = 10.1109/3PGCIC.2011.37
| s2cid = 13393620
|others= Sch. of Electr. & Comput. Eng., Univ. of Tehran, Tehran, Iran
| pages =193–199
| chapter = Suitability of Cloud Computing for Scientific Data Analyzing Applications; an Empirical Study
 
[[Category:Cloud storage]]
[[Category:Cloud computing]]