AFS resists new approaches brought by the cloud


The cloud has brought new capabilities to users, especially around ubiquitous access, but AFS still makes sense for large-scale desktop environments

Distributed file system vendor AuriStor promotes AFS approaches to its users.

Once well known in research and education, the Andrew File System, better known as AFS, continues to follow its own trajectory and control its own destiny.

AFS was invented at Carnegie Mellon University in Pittsburgh, Pennsylvania, in the 1980s to solve the problems of centralized file services in a distributed environment. As AFS delivered good early results, some of its creators left the university to build a dedicated company around it, Transarc.

Transarc Corporation became the commercial arm of the project and was purchased by IBM in 1994. It became the IBM Transarc Lab in 1999, then the IBM Pittsburgh Lab in 2001, before being consolidated into Raleigh, NC. During this period, in 2000, IBM released a branch of the AFS source code as OpenAFS and donated it to the open source community.


Interestingly, one of the founders of Transarc, and therefore of AFS, is Michael Kazar, who went on to co-found Spinnaker Networks (acquired by NetApp in 2004), where he was CTO. Most recently, he was co-founder and CTO of Avere Systems (acquired by Microsoft in 2018). In 2016, Kazar received the ACM Software System Award for his work on the development of AFS.

To refresh our readers' memory, let's summarize AFS's role in an IT environment. The core idea is to share files among a large number of machines, some of them servers where data is stored, others clients that access and consume that data; both, of course, can create new data. It is easy to see why AFS fits so well into large horizontal environments such as universities, research centers and, more generally, campus sites.

The main features of AFS and its derivatives are location independence of data under /afs, which materializes a global virtual namespace across servers and clients; strong, strict security mechanisms powered by Kerberos authentication; and multi-platform availability. Data reaches client machines through caching: when a client accesses a file, a full copy is fetched and kept locally as a cached copy. This caching behavior is a primary attribute of AFS and helps maintain a good level of performance, since subsequent requests on the same file are served by the local copy. Changes applied to the local copy are transparently written back to the central servers, where the primary copy resides, and a callback mechanism lets the servers notify clients when their cached copies become stale.
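To make that caching and callback behavior more concrete, here is a minimal, illustrative Python sketch. It is not AuriStor or OpenAFS code, and none of the class or method names below come from the AFS codebase; it only mimics the idea of whole-file client caches that the server invalidates, via callback breaks, when another client writes a file back.

# Conceptual sketch only (hypothetical names, not the AFS protocol):
# whole-file caching on clients, write-back to the server holding the
# primary copy, and callback breaks to invalidate stale cached copies.

class FileServer:
    """Holds the primary copy of each file and the callback promises."""
    def __init__(self):
        self.files = {}          # path -> bytes (primary copies)
        self.callbacks = {}      # path -> set of clients holding a cached copy

    def fetch(self, path, client):
        # The client fetches the whole file and gets a callback promise.
        self.callbacks.setdefault(path, set()).add(client)
        return self.files[path]

    def store(self, path, data, writer):
        # The writer sends its modified copy back; the server breaks the
        # callbacks so other clients know their cached copies are stale.
        self.files[path] = data
        for client in self.callbacks.get(path, set()) - {writer}:
            client.invalidate(path)
        self.callbacks[path] = {writer}


class Client:
    """Caches whole files locally; re-fetches only after invalidation."""
    def __init__(self, server):
        self.server = server
        self.cache = {}          # path -> bytes (cached copies)

    def read(self, path):
        if path not in self.cache:               # cache miss: fetch the whole file
            self.cache[path] = self.server.fetch(path, self)
        return self.cache[path]                  # later reads stay local

    def write(self, path, data):
        self.cache[path] = data
        self.server.store(path, data, self)      # write back to the primary copy

    def invalidate(self, path):
        self.cache.pop(path, None)               # callback break: drop the stale copy


# Example: two clients sharing a file under the /afs namespace.
server = FileServer()
server.files["/afs/example.org/home/readme.txt"] = b"v1"
a, b = Client(server), Client(server)
print(a.read("/afs/example.org/home/readme.txt"))    # b'v1', fetched and cached
b.write("/afs/example.org/home/readme.txt", b"v2")   # a's cached copy is invalidated
print(a.read("/afs/example.org/home/readme.txt"))    # b'v2', re-fetched from the server

In a real AFS deployment the write-back typically happens when the file is closed and callbacks are promises the server tracks per client, but the overall flow is the same: reads are served locally once the file is cached, and a client only re-fetches after the server breaks its callback.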

AFS lives on in particular through AuriStor, an American company we met recently during the 40th IT Press Tour and the current developer and maintainer of the solution. AuriStor has taken the lead in developing and promoting the new wave of AFS with its AuriStorFS system, urging AFS users to migrate to AuriStorFS, without data loss and with limited downtime, to benefit from several key improvements in performance, scalability and security. These include:

  • network transfers of up to 8.2 Gb/s per listener thread;
  • increased file server performance, as the AuriStor environment can support the load of 60 OpenAFS 1.6 file servers;
  • better Ubik (replicated database) management, mandatory locking, and per-file ACLs;
  • new platforms such as macOS, Linux and iOS, as well as recent validation with Red Hat; this distribution support allows for more flexible deployment models.

You will find more information on these functional differences here.

“AuriStor has provided an impressive list of performance, security and scalability improvements that make AuriStorFS a serious alternative to traditional approaches,” said Philippe Nicolas, analyst at Coldago Research.

The other key value of AFS, and of AuriStor in particular with all these improvements, is its capacity for very large, multi-continental deployments, unique in the market. The cloud has changed operations in recent years by making data easy to propagate and proliferate, but that aspect is completely hidden, since it is operated by cloud providers rather than by client-side machines. Other caching techniques, built on a star topology, take advantage of cloud object storage and edge file caching while exposing industry-standard file-sharing protocols. AuriStor therefore has a leading role to play in distributed computing use cases.
