'''Distributed file system in cloud''' is a file system that allows many clients to access the same data or files and supports the usual operations (create, delete, modify, read, write). Each file may be partitioned into several parts called chunks, and each chunk may be stored on a different remote machine. Typically, data is stored in files organized in a hierarchical tree in which the nodes represent directories; this facilitates the parallel execution of applications. There are several ways to share files in a distributed architecture, and each solution must be suitable for a certain type of application, depending on how complex that application is. Meanwhile, the security of the system must be ensured: confidentiality, availability and integrity are the main requirements of a secure system.
Nowadays, users can share resources from any computer or device, anywhere, through the Internet, thanks to cloud computing, which is typically characterized by scalable and elastic resources (such as physical servers, applications and services) that are virtualized and allocated dynamically. Synchronization is thus required to make sure that all devices are up to date.
Distributed file systems also enable many large, medium and small enterprises to store and access their remote data as they do local data, facilitating the use of variable resources.
==Overview==
===History===
Today, there are many implementations of distributed file systems.
The first file servers were developed by researchers in the 1970s, and Sun's Network File System became available in the early 1980s.
Before that, people who wanted to share files used the [[sneakernet]] method. Once computer networks started to spread, it became obvious that the existing file systems had many limitations and were unsuitable for multi-user environments. At first, many users turned to [[FTP]] to share files.<ref>
===Supporting techniques===
Cloud computing relies on several important techniques to strengthen the performance of the whole system. Modern data centers provide a huge environment based on data center networking (DCN) and consist of a large number of computers with varying storage capacities. The [[w:MapReduce|MapReduce]] framework has proven its performance for [[w:Data-intensive computing|data-intensive computing]] applications in parallel and distributed systems. Moreover, virtualization techniques provide dynamic resource allocation and allow multiple operating systems to coexist on the same physical server.
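As an illustration of the MapReduce programming model mentioned above, the following minimal sketch simulates the map and reduce phases in Python; the function names and sample documents are purely illustrative and do not belong to any particular framework.
<syntaxhighlight lang="python">
from collections import defaultdict

def map_phase(document):
    """Map step: emit a (word, 1) pair for every word in the input."""
    for word in document.split():
        yield word.lower(), 1

def reduce_phase(pairs):
    """Reduce step: sum the counts emitted for each distinct word."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

# In a real MapReduce system the map tasks run in parallel on the
# machines that hold the data chunks; here they run sequentially.
documents = ["the cloud stores data", "the cloud computes data"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
print(reduce_phase(pairs))  # {'the': 2, 'cloud': 2, 'stores': 1, ...}
</syntaxhighlight>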
===Applications===
Cloud computing provides large-scale computing thanks to its ability to give users the CPU and storage resources they need with complete transparency, which makes it well suited to applications that require large-scale distributed processing. Such [[w:Data-intensive computing|data-intensive computing]] needs a high-performance file system that can share data between [[w:Virtual machine|virtual machines]] (VMs).<ref>
The cloud computing and cluster computing paradigms are becoming increasingly important in industrial data processing and in scientific applications such as astronomy and physics, which frequently demand the availability of a huge number of computers to carry out the required experiments. Cloud computing represents a new way of using the computing infrastructure: resources are allocated dynamically, released when no longer needed, and users pay only for what they use rather than reserving resources for a period fixed in advance (the pay-as-you-go model). Such services are often provided in the context of a [[w:SLA|service-level agreement]].
==Architectures==
Most distributed file systems are built on the client-server architecture, but other, decentralized solutions exist as well.
====Client-server architecture====
NFS (Network File System) is one of the most widely used file systems based on this architecture. NFS makes it possible to share files between a number of machines on a network as if they were located locally, providing a standardized view of the local file system. The NFS protocol allows heterogeneous client processes, possibly running on different operating systems and machines, to access files on a distant server while ignoring the actual ___location of those files.
However, relying on a single server makes the NFS protocol suffer from low availability and poor scalability. Using multiple servers does not solve the problem, since each server works independently.
The model used by NFS is the remote file service model, also called the remote access model, which contrasts with the upload/download model:
* Remote access model: provides transparency; the client has access to the file and can issue requests against the remote file, which remains on the server.<ref>
* Upload/download model: the client can access the file only locally; it must download the file, make its modifications and upload it again so that the file can be used by other clients.
The file system offered by NFS is almost the same as the one offered by UNIX systems. Files are hierarchically organized into a naming graph in which directories and files are represented by nodes.
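The difference between the two file-access models can be illustrated with the following minimal sketch, in which the client classes, the server object and its methods are hypothetical rather than part of the NFS protocol.
<syntaxhighlight lang="python">
class RemoteAccessClient:
    """Remote access model: every operation is a request to the server;
    the file itself never leaves the server."""
    def __init__(self, server):
        self.server = server

    def append(self, path, data):
        self.server.append(path, data)        # request executed remotely


class UploadDownloadClient:
    """Upload/download model: the whole file is copied locally,
    modified, then uploaded back so other clients can see the change."""
    def __init__(self, server):
        self.server = server

    def append(self, path, data):
        content = self.server.download(path)  # copy the file to the client
        content += data                       # modify the local copy
        self.server.upload(path, content)     # publish the new version
</syntaxhighlight>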
This is rather an improvement of the client-server architecture that enhances the execution of parallel applications. The technique used here is file striping: a file is split into several segments which are saved on multiple servers, so that different parts of the file can be accessed in parallel.
If the application does not benefit from this technique, it may be more convenient simply to store different files on different servers.
However, this approach becomes more interesting when it comes to organizing a distributed file system for large data centers, such as those of Amazon and Google, which offer services to web clients allowing many operations (reading, updating, deleting, ...) on a huge number of files distributed among a massive number of computers. Note that a massive number of computers opens the door to more hardware failures, because more server machines mean more hardware and thus a higher probability of failure.
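A minimal sketch of the file-striping idea follows; the chunk size and the round-robin placement are illustrative assumptions, not the policy of any particular system.
<syntaxhighlight lang="python">
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, a chunk size typical of such systems

def stripe_file(file_size: int, servers: list) -> dict:
    """Assign each fixed-size chunk of a file to a server in round-robin
    order, so that different parts of the file can be read in parallel."""
    placement = {}
    n_chunks = (file_size + CHUNK_SIZE - 1) // CHUNK_SIZE
    for chunk_id in range(n_chunks):
        placement[chunk_id] = servers[chunk_id % len(servers)]
    return placement

# Example: a 200 MB file striped over three servers.
print(stripe_file(200 * 1024 * 1024, ["s1", "s2", "s3"]))
# {0: 's1', 1: 's2', 2: 's3', 3: 's1'}
</syntaxhighlight>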
=====Design principles=====
GFS and HDFS are specifically built for handling [[w:batch processing|batch processing]] on very large data sets.
For that purpose, the following hypotheses must be taken into account:
* High availability: the cluster can contain thousands of file servers, and some of them can be down at any time
* A server belongs to a rack, a room, a data center, a country and a continent, so that its geographical ___location can be identified precisely
* The size of a file can vary from many gigabytes to many terabytes, and the file system should be able to support a massive number of files
* Need to support append operations and allow file contents to be visible even while a file is being written
* Communication is reliable among working machines. TCP/IP is used with an [[w:
=====Examples=====
GFS uses [[w:MapReduce|MapReduce]], which allows users to create programs and run them on multiple machines without having to think about parallelization and load-balancing issues.
The GFS architecture is based on a single master, multiple chunkservers and multiple clients.
The master server, running on a dedicated node, is responsible for coordinating storage resources and managing the files' [[w:metadata|metadata]] (the equivalent of, for example, inodes in classical file systems).
Each file is split into multiple chunks of 64 megabytes. Each chunk is stored on a chunk server. A chunk is identified by a chunk handle, a globally unique 64-bit number assigned by the master when the chunk is first created.
As said previously, the master maintains all of the file metadata, including file names, directories and the mapping of each file to the list of chunks that contain its data. The metadata is kept in the master's main memory, along with the mapping of files to chunks. Updates to this metadata are logged to disk in an operation log, which is also replicated onto remote machines. When the log becomes too large, a checkpoint is made and the main-memory data is stored in a [[w:B-tree|B-tree]] structure to facilitate mapping it back into main memory.
For fault tolerance, a chunk is replicated onto multiple chunkservers, three by default.<ref>
The advantage of this design is its simplicity: the master is responsible for allocating the chunkservers for each chunk and is contacted only for metadata; for all other data, the client interacts directly with the chunkservers.
The master also keeps track of where each chunk is located. However, it does not attempt to maintain the chunk locations precisely; instead, it occasionally contacts the chunkservers to see which chunks they have stored.
GFS is thus a scalable distributed file system for data-intensive applications.
The master does not become a bottleneck despite all the work it has to accomplish: when a client wants to access data, it communicates with the master only to find out which chunkserver holds that data, and the rest of the communication takes place directly between the client and the chunkserver concerned.
In GFS, most files are modified by appending new data rather than overwriting existing data. Once written, the files are usually only read, and often only sequentially rather than randomly, which makes this DFS particularly suitable for scenarios in which many large files are created once but read many times.
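The read path just described can be summarized with the following minimal sketch; the in-memory tables, paths and server names are illustrative assumptions, not the actual GFS interfaces.
<syntaxhighlight lang="python">
class Master:
    """Holds only metadata: file -> chunk handles -> chunkserver locations."""
    def __init__(self):
        self.file_to_chunks = {"/logs/app.log": [17, 18]}      # illustrative
        self.chunk_locations = {17: ["cs1", "cs2", "cs3"],
                                18: ["cs2", "cs4", "cs5"]}

    def lookup(self, path, chunk_index):
        handle = self.file_to_chunks[path][chunk_index]
        return handle, self.chunk_locations[handle]

# The client asks the master where a chunk lives, then reads the data
# directly from one of the returned chunkservers (not via the master).
master = Master()
handle, replicas = master.lookup("/logs/app.log", chunk_index=0)
print(handle, replicas)   # 17 ['cs1', 'cs2', 'cs3']
</syntaxhighlight>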
Let us now detail the file-access process. When a client wants to write to or update a file, the master designates a replica for this operation; that replica becomes the primary replica, since it is the first one to receive the modification from clients.
The process of writing is decomposed into two steps:
* Sending: first, and by far most importantly, the client contacts the master to find out which chunkservers hold the data. The client is given a list of replicas identifying the primary chunkserver and the secondary ones. The client then contacts the nearest replica chunkserver and sends the data to it; that server forwards the data to the next closest one, which then forwards it to yet another replica, and so on. At the end of this phase the data has been propagated to all replicas but not yet written to a file (it sits in a cache).
* Writing: when all the replicas have received the data, the client sends a write request to the primary chunkserver, identifying the data that was sent in the sending phase. The primary then assigns a sequence number to the write operations it has received, applies the writes to the file in sequence-number order, and forwards the write requests in that order to the secondaries. Meanwhile, the master is kept out of the loop.
Consequently, two types of flow can be distinguished: the data flow, associated with the sending phase, and the control flow, associated with the writing phase. This ensures that the primary chunkserver controls the write order.
Note that when the master grants the write operation to a replica, it increments the chunk version number and informs all of the replicas containing that chunk of the new version number. Chunk version numbers make it possible to detect whether any replica missed an update because its chunkserver was down.
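The two-phase write described above can be sketched as follows; the class and method names are illustrative, and leases, version numbers and error handling are omitted for brevity.
<syntaxhighlight lang="python">
class Chunkserver:
    def __init__(self, name):
        self.name, self.cache, self.chunk = name, [], b""

    def push(self, data):            # sending phase: buffer the data only
        self.cache.append(data)

    def apply(self, order):          # writing phase: apply mutations in order
        for i in order:
            self.chunk += self.cache[i]
        self.cache.clear()

def write(client_data, replicas):
    """replicas[0] is the primary; the rest are secondaries."""
    # Phase 1 (data flow): propagate the data to every replica. In GFS this
    # is pipelined from the nearest replica onward; here it is done directly.
    for server in replicas:
        server.push(client_data)
    # Phase 2 (control flow): the primary fixes the write order and forwards it.
    primary, secondaries = replicas[0], replicas[1:]
    order = list(range(len(primary.cache)))
    primary.apply(order)
    for server in secondaries:
        server.apply(order)

servers = [Chunkserver(n) for n in ("cs1", "cs2", "cs3")]
write(b"new record\n", servers)
print(servers[2].chunk)  # b'new record\n' on every replica
</syntaxhighlight>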
Some newer Google applications did not work well with the 64-megabyte chunk size, so in 2004 GFS started to implement the [[w:BigTable|BigTable]] approach to address this.[http://arstechnica.com/business/2012/01/the-big-disk-drive-in-the-sky-how-the-giants-of-the-web-store-big-data/]
{{Cat main|Apache Hadoop}}
HDFS (Hadoop Distributed File System), hosted by the Apache Software Foundation, is a distributed file system designed to hold very large amounts of data (terabytes or even petabytes). Its architecture is similar to that of GFS, i.e. a master/slave architecture. HDFS is normally installed on a cluster of computers.
The design of Hadoop is inspired by Google's systems, namely the Google File System, Google MapReduce and [[w:BigTable|BigTable]]; these three techniques map respectively to the Hadoop Distributed File System (HDFS), Hadoop MapReduce and Hadoop Base (HBase).<ref>
An HDFS cluster consists of a single NameNode and several DataNode machines. The NameNode, a master server, manages and maintains the metadata of the storage DataNodes in its RAM, while the DataNodes manage the storage attached to the nodes they run on.
The NameNode and DataNode are software programs designed to run on commodity machines, which typically run a GNU/Linux OS. HDFS can be run on any machine that supports Java and can therefore run either the NameNode or the DataNode software.<ref>
More explicitly, a file is split into one or more equal-size blocks, except the last one, which can be smaller. Each block is replicated on multiple DataNodes to guarantee high availability; by default, each block is replicated three times, a process called "block-level replication".<ref>
The NameNode manages file system namespace operations such as opening, closing and renaming files and directories, and regulates file access. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system's clients, managing block allocation or deletion, and replicating blocks.
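The block-level replication idea can be illustrated with the following minimal sketch; the block size, replication factor and placement function are illustrative assumptions and do not reproduce the rack-aware placement policy of HDFS.
<syntaxhighlight lang="python">
import itertools

BLOCK_SIZE = 128 * 1024 * 1024   # illustrative block size
REPLICATION = 3                  # default replication factor

def place_blocks(file_size, datanodes):
    """Map each block of a file to REPLICATION DataNodes, cycling through
    the nodes (real HDFS placement is rack-aware; this sketch is not)."""
    n_blocks = (file_size + BLOCK_SIZE - 1) // BLOCK_SIZE
    ring = itertools.cycle(datanodes)
    placement = {}
    for block_id in range(n_blocks):
        placement[block_id] = [next(ring) for _ in range(REPLICATION)]
    return placement

print(place_blocks(300 * 1024 * 1024, ["dn1", "dn2", "dn3", "dn4"]))
# {0: ['dn1', 'dn2', 'dn3'], 1: ['dn4', 'dn1', 'dn2'], 2: ['dn3', 'dn4', 'dn1']}
</syntaxhighlight>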
=====Load balancing and rebalancing=====
======Load balancing======
Load balancing is essential for efficient operation in distributed environments. It means distributing work among different nodes so that more work gets done in the same amount of time and clients are served faster.
Consider a large-scale distributed file system containing N chunkservers in a cloud (N may be 1,000, 10,000 or more) in which a certain number of files are stored. Each file is split into several chunks of fixed size (for example 64 megabytes). The load of each chunkserver is proportional to the number of chunks hosted by that server.
In a load-balanced cloud, resources can be used efficiently while maximizing the performance of MapReduce-based applications.
In a cloud computing environment, failure is the norm, and chunkservers may be upgraded, replaced and added to the system. Files can also be dynamically created, deleted and appended. This leads to load imbalance in a distributed file system, meaning that the file chunks are not distributed equitably between the nodes.
Distributed file systems in clouds such as GFS and HDFS rely on central servers (the master for GFS and the NameNode for HDFS) to manage the metadata and the load balancing. The master rebalances replicas periodically: data must be moved from one DataNode/chunkserver to another if the free space on the first server falls below a certain threshold.
However, this centralized approach can become a bottleneck for those servers, as they may become unable to manage a large number of file accesses; handling the load-imbalance problem at the central nodes complicates the situation further, since it adds to their already heavy loads. Note that the load-rebalancing problem is [[w:NP-hard|NP-hard]].
In order to get a large number of chunkservers to work in collaboration and to solve the problem of load balancing in distributed file systems, several approaches have been proposed, such as reallocating file chunks so that they are distributed as uniformly as possible while reducing the movement cost as much as possible.
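A minimal greedy rebalancing sketch follows; the stopping threshold and the move policy are illustrative and are not taken from any of the proposals mentioned above.
<syntaxhighlight lang="python">
def rebalance(load, max_moves=100):
    """Greedily move one chunk at a time from the most loaded server to
    the least loaded one until the loads differ by at most one chunk.
    `load` maps a server name to its number of hosted chunks."""
    moves = []
    for _ in range(max_moves):
        heaviest = max(load, key=load.get)
        lightest = min(load, key=load.get)
        if load[heaviest] - load[lightest] <= 1:
            break
        load[heaviest] -= 1
        load[lightest] += 1
        moves.append((heaviest, lightest))
    return moves

load = {"cs1": 12, "cs2": 3, "cs3": 6}
print(rebalance(load))   # chunks migrate from cs1 to cs2 and cs3
print(load)              # {'cs1': 7, 'cs2': 7, 'cs3': 7}
</syntaxhighlight>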
==Communication==
The data communication (send/receive) operations transfer data from the application buffer to the kernel of the machine. [[w:Transmission Control Protocol|TCP]] controls the process of sending data and is implemented in the kernel; however, in the case of network congestion or errors, TCP may not send the data immediately.
While transferring data from a buffer in the [[w:Kernel (computing)|kernel]] to the application, the machine does not read the byte stream from the remote machine directly; instead, TCP is responsible for buffering the data for the application.<ref>
A high level of communication performance can be achieved by choosing appropriate buffer sizes for file reading and writing, or file sending and receiving, at the application level.
Explicitly, the buffer mechanism is developed using a circular [[w:Linked list|linked list]].<ref>
If a BufferNode has no free space, it sends a wait signal to the client, telling it to wait until space becomes available.<ref>
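A minimal sketch of such a circular buffer with back-pressure follows; the class and function names (including BufferNode) are used here only for illustration and do not reproduce the design of the cited system.
<syntaxhighlight lang="python">
class BufferNode:
    """One node of a circular buffer: a fixed-size slot plus a link to
    the next node, forming a circular linked list."""
    def __init__(self, capacity):
        self.capacity, self.data, self.next = capacity, None, None

def make_ring(n_nodes, capacity):
    nodes = [BufferNode(capacity) for _ in range(n_nodes)]
    for i, node in enumerate(nodes):
        node.next = nodes[(i + 1) % n_nodes]   # close the circle
    return nodes[0]

def try_write(node, chunk):
    """Store a chunk in the current node if it is free; otherwise signal
    the client to wait (back-pressure) instead of dropping data."""
    if node.data is None and len(chunk) <= node.capacity:
        node.data = chunk
        return node.next, "ok"
    return node, "wait"     # no free space: the client must wait

head = make_ring(n_nodes=3, capacity=4)
cursor, status = try_write(head, b"abcd")
print(status)               # 'ok'; a full ring would return 'wait'
</syntaxhighlight>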
==Security keys==
In cloud computing, the most important security concepts are confidentiality, availability and integrity.
Confidentiality is indispensable in order to keep private data from being disclosed and to maintain privacy, while integrity ensures that data is not corrupted [Security and Privacy in Cloud Computing].
====Confidentiality====
Confidentiality means that data and computation tasks are confidential: neither the cloud provider nor other clients can access the client's data.
Much research has been done on confidentiality, because it is one of the crucial points that still present challenges for cloud computing; a lack of trust toward cloud providers is a related issue.
The risk of an unsecured environment materializes if the service provider can locate the consumer's data in the cloud, has the privilege to access and retrieve it, and can understand its meaning (the types of data, the functionalities and interfaces of the application, and the format of the data).
The geographic ___location of data stores influences privacy and confidentiality, and the ___location of clients should also be taken into account: clients in Europe may not want to use data centers located in the United States, because the confidentiality of their data would then not be guaranteed. To address that problem, some cloud computing vendors have included the geographic ___location of the hosting as a parameter of the service-level agreement made with the customer.<ref>
One approach that may help to address the confidentiality issue is data encryption.<ref>
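As a simple illustration of client-side encryption before uploading data to an untrusted store, the following sketch uses the Python cryptography package; the data and the simulated upload are, of course, made up.
<syntaxhighlight lang="python">
from cryptography.fernet import Fernet   # pip install cryptography

# The client keeps the key; the cloud provider only ever sees ciphertext.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"customer record: account=42, balance=1000"
ciphertext = cipher.encrypt(plaintext)    # what gets uploaded to the cloud

# Simulated upload/download of the opaque blob.
stored_in_cloud = ciphertext

# Only a client holding the key can recover the data.
assert cipher.decrypt(stored_in_cloud) == plaintext
</syntaxhighlight>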
====Availability====
Availability is generally achieved by [[w:replication|replication]]; meanwhile, [[w:consistency|consistency]] must be guaranteed.
However, consistency and availability cannot both be fully achieved at the same time: either consistency is relaxed so that the system can remain available, or consistency is made a priority and the system is sometimes unavailable.
On the other hand, each piece of data has an identity, namely a key produced by a one-way cryptographic hash function (e.g. [[w:MD5|MD5]]), and its ___location is determined by hashing this key. The key space is partitioned into multiple partitions.<ref>{{harvnb|Nicolas Bonvin, Thanasis G. Papaioannou and Karl Aberer|p=206|id= Bonvin}}</ref>
To maximize data availability and durability, the replicas are placed on different, geographically distant servers, because availability increases with geographical diversity.
The replication process evaluates the data availability, which must remain above a certain minimum; otherwise, the data is replicated to another chunk server. Each partition p has an availability value given by the following formula:
<math>\mathrm{avail}_p=\sum_{i=0}^{|s_p|}\sum_{j=i+1}^{|s_p|} \mathrm{conf}_i \cdot \mathrm{conf}_j \cdot \mathrm{diversity}(s_i,s_j)</math>
where s_i and s_j are the servers hosting replicas of the partition, conf_i and conf_j are the confidence levels of servers i and j (which depend on technical factors such as hardware components and on non-technical ones such as the economic and political situation of a country), and diversity(s_i, s_j) is the geographical distance between s_i and s_j.
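As a worked illustration of this formula, the following sketch computes the availability of one partition with made-up confidence and diversity values.
<syntaxhighlight lang="python">
from itertools import combinations

# Hypothetical values for three servers hosting replicas of one partition.
conf = {"s1": 0.9, "s2": 0.8, "s3": 0.7}         # server confidence
diversity = {("s1", "s2"): 0.2,                   # normalized geographical
             ("s1", "s3"): 0.9,                   # distance between servers
             ("s2", "s3"): 0.8}

# avail_p = sum over all pairs (i, j) of conf_i * conf_j * diversity(s_i, s_j)
avail = sum(conf[a] * conf[b] * diversity[(a, b)]
            for a, b in combinations(sorted(conf), 2))
print(round(avail, 3))   # 0.9*0.8*0.2 + 0.9*0.7*0.9 + 0.8*0.7*0.8 = 1.159
</syntaxhighlight>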
====Integrity====
Integrity in cloud computing implies both data integrity and computation integrity: data has to be stored correctly on cloud servers, and in the case of failures or incorrect computation, problems have to be detected.
Data integrity is relatively easy to achieve thanks to cryptography (typically through [[w:Message authentication code|message authentication codes]], or MACs, on data blocks).<ref>
Data integrity can be affected in different ways, whether through malicious events or administration errors (e.g. [[w:Backup|backup]] and restore, data migration, or changing memberships in [[w:Peer-to-peer|P2P]] systems).<ref>
Mechanisms exist to check data integrity. For instance, HAIL (High-Availability and Integrity Layer) is a distributed cryptographic system that allows a set of servers to prove to a client that a stored file is intact and retrievable.<ref>
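The use of MACs on data blocks mentioned above can be illustrated with Python's standard hmac module; the key and block contents are made up.
<syntaxhighlight lang="python">
import hmac, hashlib

key = b"client-held secret key"              # never shared with the provider
block = b"chunk 17 of /logs/app.log"          # a data block sent for storage

# The client computes and keeps the tag before handing the block to the cloud.
tag = hmac.new(key, block, hashlib.sha256).hexdigest()

# Later, the block retrieved from the cloud is verified against the tag.
retrieved = block                             # pretend this came back intact
ok = hmac.compare_digest(tag, hmac.new(key, retrieved, hashlib.sha256).hexdigest())
print(ok)   # True; any corruption of the block would make this False
</syntaxhighlight>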
==Cloud-based synchronization of distributed file systems==
More and more users have multiple devices with ad hoc connectivity, and these devices need to be synchronized. An important point is to maintain user data by synchronizing replicated data sets between an arbitrary number of servers; this is useful for backups and for offline operation. Indeed, when network conditions are poor, the user's device selectively replicates the part of the data that will be modified later while offline; once network conditions improve, the device synchronizes.<ref>
Two approaches exist to tackle the distributed synchronization issue: user-controlled peer-to-peer synchronization and cloud master-replica synchronization.
* User-controlled peer-to-peer: software such as [[w:rsync|rsync]] must be installed on all of the user's computers that contain their data. The files are synchronized peer to peer, and users must supply the network addresses of all the devices and the synchronization parameters, which makes this a manual process.
* Cloud master-replica synchronization: widely used by cloud services; a master replica containing all the data to be synchronized is kept as a central copy in the cloud, and all updates and synchronization operations are pushed to this central copy, offering a high level of availability and reliability in case of failures.
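A minimal sketch of master-replica synchronization follows; the last-writer-wins conflict rule is an illustrative simplification rather than a description of any particular service.
<syntaxhighlight lang="python">
def sync(device_state, master_state):
    """Merge a device's replica with the central (master) copy.
    Each state maps a file name to a (version, content) pair; the newer
    version wins (last-writer-wins, a deliberately naive rule)."""
    for name in set(device_state) | set(master_state):
        dev = device_state.get(name, (0, None))
        mas = master_state.get(name, (0, None))
        newest = dev if dev[0] >= mas[0] else mas
        device_state[name] = master_state[name] = newest

master = {"notes.txt": (3, "cloud copy")}
phone = {"notes.txt": (5, "edited offline"), "todo.txt": (1, "buy milk")}
sync(phone, master)
print(master)  # both files now at their newest versions on the master
</syntaxhighlight>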
==Economic aspects==
Cloud computing is growing rapidly: spending on it by the US government was projected to grow at roughly a 40% compound annual growth rate ([[w:CAGR|CAGR]]) and to reach 7 billion dollars by 2015, a figure that should be taken into consideration.<ref>
More and more companies are using cloud computing to manage massive amounts of data and to overcome the lack of storage capacity.
Indeed, companies can use resources as a service to meet their computing needs without having to invest in infrastructure, paying only for what they use (the pay-as-you-go model).<ref>
Every application provider has to periodically pay the cost of each server where replicas of its data are stored. The cost of a server is generally determined by the quality of the hardware, the storage capacity, and its query-processing and communication overhead.<ref>
Cloud computing makes it easier for enterprises to scale their services according to client demand.
The pay-as-you-go model has also eased the way for startup companies that wish to run compute-intensive businesses. Cloud computing likewise offers an opportunity to many third-world countries that do not have enough computing resources, thus enabling IT services there.
Cloud computing can lower IT barriers to innovation.<ref>
Despite the wide use of cloud computing, efficient sharing of large volumes of data in an untrusted cloud remains a challenging research topic.
==Bibliography==
* {{cite book
| last1 = Andrew
| first1 = S.Tanenbaum
| first2 = van Steen
| title = Distributed file systems principles and paradigms
| id = Tanenbaum
}}
* {{cite web
| id = Fabio
| author = Fabio Kon
| url = http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.4609
| website = http://www.citeulike.org/group/3944/article/3390802
}}
* {{
| id = sun
| url = http://www.cse.chalmers.se/~tsigas/Courses/DCDSeminar/Files/afs_report.pdf
| title = Distributed file systems – an overview
#* {{cite journal
| id = Zhang
| last1 = Zhang
| first1 = Qi-fei
#* {{cite journal
| id = Azzedin
| last1 = Farag
| first1 = Azzedin
| doi = 10.1109/CTS.2013.6567222
| others = Information and Computer Science Department King Fahd University of Petroleum and Minerals
| pages =
}}
#* {{
| id = Krzyzanowski
| last1 = Paul
| first1 = Krzyzanowski
#* {{cite journal
| id = Kobayashi
| last1 = K.
| first1 = Kobayashi
#* {{cite journal
| id = Humbetov
| last1 = Shamil
| first1 = Humbetov
| doi = 10.1109/ICAICT.2012.6398489
| others = Department of Computer Engineering Qafqaz University Baku, Azerbaijan
| pages =
}}
#* {{cite journal
| id = Hsiao
| last1 = Hung-Chang
| first1 = Hsiao
| doi = 10.1109/TPDS.2012.196
| others = National Cheng Kung University, Tainan
| pages =
}}
#* {{cite journal
| id = Fan
| last1 = Kai
| first1 = Fan
| doi = 10.1109/INCoS.2013.14
| others = State Key Lab. of Integrated Service Networks, Xidian Univ., Xi'an, China
| pages =
}}
#* {{cite journal
| id = Upadhyaya
| last1 = B.
| first1 = Upadhyaya
| doi = 10.1109/NCM.2008.164
| others = Sch. of Bus. IT, Kookmin Univ., Seoul
| pages =
}}
#* {{cite journal
| id = Adamov
| last1 = Abzetdin
| first1 = Adamov
| doi = 10.1109/ICAICT.2012.6398484
| others = Comput. Eng. Dept., Qafqaz Univ., Baku, Azerbaijan
| pages =
}}
#* {{cite journal
| id =Brandt
| last1 = S.A.
| first1 = Brandt
| doi = 10.1109/MASS.2003.1194865
| others = Storage Syst. Res. Center, California Univ., Santa Cruz, CA, USA
| pages =
}}
#* {{cite journal
| id =Brandt
| last1 = Garth A.
| first1 = Gibson
#* {{cite journal
| id =Khaing
| last1 = Cho Cho
| first1 = Khaing
| url =http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6045066&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6045066
| doi = 10.1109/CCIS.2011.6045066
| pages =
}}
#* {{cite journal
| id =Brandt
| last1 = S.A.
| first1 = Brandt
| doi = 10.1109/SWS.2011.6101263
| others = PCN&CAD Center, Beijing Univ. of Posts & Telecommun., Beijing, China
| pages =
}}
#* {{cite journal
| id = Ghemawat
| last1 = Sanjay
| first1 = Ghemawat
| year = 2003
| doi = 10.1145/945445.945450
| pages =
}}
#Security Concept
#* {{cite journal
| id = Vecchiola
| last1 = C
| first1 = Vecchiola
| doi = 10.1109/I-SPAN.2009.150
| others = Dept. of Comput. Sci. & Software Eng., Univ. of Melbourne, Melbourne, VIC, Australia
| pages =
}}
#* {{cite journal
| id = Hongtao
| last1 = Du
| first1 = Hongtao
| doi = 10.1109/MIC.2012.6273264
| others = Comput. Coll., Northwestern Polytech. Univ., Xi''An, China
| pages =
}}
#* {{cite journal
| id =Scott
| last1 = A.Brandt
| first1 = Scott
#* {{cite journal
| id = Kaufman
| last1 = Lori M.
| first1 = Kaufman
| url =http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5189563
| doi = 10.1109/MSP.2009.87
| pages =
}}
#* {{cite journal
| id = HAIL
| last1 = Kevin
| first1 = D. Bowers
| url =http://dl.acm.org/ft_gateway.cfm?id=1653686&ftid=707973&dwn=1&CFID=382853364&CFTOKEN=27119971
| doi = 10.1145/1653662.1653686
| pages =
}}
#* {{cite journal
| id = Ari Juels
| last1 = Ari
| first1 = Juels
| url =http://dl.acm.org/ft_gateway.cfm?id=2408793&ftid=1338744&dwn=1&CFID=382853364&CFTOKEN=27119971
| doi = 10.1145/2408776.2408793
| pages =
}}
#* {{cite journal
| id = Jing
| last1 = Zhang
| first1 = Jing
| doi = 10.1109/Grid.2012.17
| others = Dept. of Comput. Sci., Hefei Univ. of Technol., Hefei, China
| pages =
}}
#* {{cite journal
| id = Pan
| last1 = A.
| first1 = Pan
| doi = 10.1109/SC.Companion.2012.103
| others = Dept. of Electr. & Comput. Eng., Purdue Univ., West Lafayette, IN, USA
| pages =
}}
#* {{cite journal
| id = Fan-Hsun
| last1 = Tseng
| first1 = Fan-Hsun
| doi = 10.1109/ISPACS.2012.6473485
| others = Dept. of Comput. Sci. & Inf. Eng., Nat. Central Univ., Taoyuan, Taiwan
| pages =
}}
#* {{cite journal
| id = Di Sano
| last1 = M
| first1 = Di Sano
| doi = 10.1109/WETICE.2012.104
| others = Dept. of Electr., Electron. & Comput. Eng., Univ. of Catania, Catania, Italy
| pages =
}}
#* {{cite journal
| id = Zhonghua
| last1 = Sheng
| first1 = Zhonghua
| doi = 10.1109/CSC.2011.6138512
| others = Dept. of Comput. Sci. & Eng., Hong Kong Univ. of Sci. & Technol., Hong Kong, China
| pages =
}}
#* {{cite journal
| id = Zhifeng
| last1 = Zhifeng
| first1 = Xiao
| url =http://ieeexplore.ieee.org.docproxy.univ-lille1.fr/stamp/stamp.jsp?tp=&arnumber=6238281
| doi = 10.1109/SURV.2012.060912.00182
| pages =
}}
#* {{
| id = Horrigan
| last1 = John B
| first1 = Horrigan
#* {{cite journal
| id = Stephen
| last1 = Stephen
| first1 = S. Yau
| year = 2010
| url = http://www.ijsi.org/ch/reader/create_pdf.aspx?file_no=i68&flag=&journal_id=ijsi&year_id=2010
| pages =
}}
#* {{cite journal
| id = Plantard
| last1 = T.
| first1 = Plantard
| url = http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6650119
| doi = 10.1109/TIFS.2013.2287732
| pages =
}}
#* {{cite journal
| id = Michael
| first1 = Michael
| last1 = Naehrig
| url = http://dl.acm.org/ft_gateway.cfm?id=2046682&ftid=1047116&dwn=1&CFID=385102978&CFTOKEN=71213808
| doi = 10.1145/2046660.2046682
| pages =
}}
#* {{cite journal
| id = Miranda
| first1 = Miranda
| last1 = Mowbray
#* {{cite journal
| id = Vogels
| last1 = Vogels
| first1 = Werner
Line 748 ⟶ 715:
| url = http://dl.acm.org/ft_gateway.cfm?id=1435432&ftid=574047&dwn=1&CFID=267174164&CFTOKEN=52170875
| doi = 10.1145/1435417.1435432
| pages =
}}
#* {{cite journal
| id = Bonvin
| last1 = Nicolas
| first1 = Bonvin
| url = http://dl.acm.org/ft_gateway.cfm?id=1807162&ftid=809875&dwn=1&CFID=385102978&CFTOKEN=71213808
| doi = 10.1145/1807128.1807162
| pages =
}}
#* {{cite journal
| id = Kraska
| last1 = Tim
| first1 = Kraska
| year = 2009
| url = http://dl.acm.org/ft_gateway.cfm?id=1687657&type=pdf&CFID=385102978&CFTOKEN=71213808
| pages =
}}
#* {{cite journal
| id = Abadi
| last1 = Daniel
| first1 = J. Abadi
#* {{cite journal
| id = Vogels
| last1 = Ari
| first1 = Juels
| url = http://dl.acm.org/ft_gateway.cfm?id=2408793&ftid=1338744&dwn=1&CFID=385102978&CFTOKEN=71213808
| doi = 10.1145/2408776.2408793
| pages =
}}
#* {{cite journal
| id = Vogels
| last1 = Ari
| first1 = Juels
| url = http://dl.acm.org/ft_gateway.cfm?id=1315317&ftid=476752&dwn=1&CFID=385102978&CFTOKEN=71213808
| doi = 10.1145/1315245.1315317
| pages =
}}
#* {{cite journal
| id = Ari
| last1 = Ari
| first1 = Ateniese
| url = http://dl.acm.org/ft_gateway.cfm?id=1315318&ftid=481834&dwn=1&CFID=385102978&CFTOKEN=71213808
| doi = 10.1145/1315245.1315318
| pages =
}}
#* {{cite journal
| id = Giuseppe
| last1 = Giuseppe
| first1 = Ateniese
#* {{cite journal
| id = Vogels
| last1 = Chris
| first1 = Erway
| url = http://dl.acm.org/ft_gateway.cfm?id=1653688&ftid=707975&dwn=1&CFID=385102978&CFTOKEN=71213808
| doi = 10.1145/1653662.1653688
| pages =
}}
#Synchronization
#* {{cite journal
| id = Uppoor
| last1 = S.
| first1 = Uppoor
| url = http://ieeexplore.ieee.org.docproxy.univ-lille1.fr/stamp/stamp.jsp?tp=&arnumber=5613087
| doi = 10.1109/CLUSTERWKSP.2010.5613087
| pages =
| others =Inst. of Comput. Sci. (ICS), Found. for Res. & Technol. - Hellas (FORTH), Heraklion, Greece
}}
#* {{cite journal
| id = Kaufman
| last1 = Lori M.
| first1 = Kaufman
| url = http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5189563
| doi = 10.1109/MSP.2009.87
| pages =
}}
#* {{cite journal
| id = Kaufman
| last1 = Zhi
| first1 = Lia
#* {{cite journal
| id = Angabini
| last1 = A.
| first1 = Angabini
| doi = 10.1109/3PGCIC.2011.37
|others= Sch. of Electr. & Comput. Eng., Univ. of Tehran, Tehran, Iran
| pages =
}}