
Heiko Krämer

Published at 08.09.2015

How-To’s & Tutorials

One Swift to Store Them All – Part 1

A thing or two about conversion

Swift is one of the main services for storing all images, movies and other types of files. All assets, to be blunt. It is OpenStack's object store, very similar to Amazon's large and famous S3 service. Swift offers a variety of useful features, but I think the most popular one, commonly used by our customers, is the TempURL.

When writing an application that needs access to an object store such as Swift, the service provides you with a range of possibilities for serving your objects:

  1. GET your objects via your application

a) In this case you drag all your objects through your application server, and its bandwidth usage grows accordingly.

  2. GET your objects via client requests in public mode

a) If the application tells clients where the objects are stored, you are essentially using Swift as a CDN (content delivery network). Remember, in this case all files are public and anyone can download them.

  3. GET your objects via client requests in private mode

a) It's basically the same as above, but the URL is extended with additional query parameters. Swift checks these parameters against a stored key; if the signature is valid, the client can access the file for a configurable number of seconds. This is the TempURL feature mentioned above (see the sketch after this list).
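
As a minimal sketch of the private (TempURL) mode: the snippet below builds a time-limited download URL, assuming the secret has already been stored on the account via the X-Account-Meta-Temp-URL-Key header. Host, account, container and object names are placeholders.

```python
import hmac
import time
from hashlib import sha1

# Placeholder values; the key must match X-Account-Meta-Temp-URL-Key.
key = b'mysecretkey'
method = 'GET'
expires = int(time.time()) + 60            # URL stays valid for 60 seconds
path = '/v1/AUTH_demo/container/object.jpg'

# Swift's TempURL middleware signs "METHOD\nexpires\npath" with HMAC-SHA1.
hmac_body = f'{method}\n{expires}\n{path}'
signature = hmac.new(key, hmac_body.encode('utf-8'), sha1).hexdigest()

temp_url = (f'https://swift.example.com{path}'
            f'?temp_url_sig={signature}&temp_url_expires={expires}')
print(temp_url)
```

Anyone who receives such a URL can download the object until the expiry time is reached, without needing any Swift credentials.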

Table of Contents

  • Past and Future – get to know our Swift story
  • Over the past year we’ve learnt a lot about it. What have we decided to change?

Past and Future – get to know our Swift story

We’ve been testing Swift for two years now, in a really small cluster, only for a few customers with small load peaks. Our first goal was to build a small Swift cluster in order to see how Swift behaves in small-scale production; some problems only become visible once the service is run with real customers.

Our first setup ran Swift version 1.7.2 (Folsom); we later upgraded the cluster to 1.13.0 (Icehouse). The upgrade was really simple and went through without any downtime. As mentioned above, we built the cluster with two servers, each with a RAID 5 of 6 TB disk capacity. This RAID (sda5) stored account, container and object data, so all Swift processes shared this single RAID. A really bad idea, as we know by now ☺

The rings were built with the parameters 20 2 1 (partition power, replica count, min_part_hours), so over a million partitions for each ring and a replica count of two.

This means that on each host there were over a million directories (one directory per partition) for each service: account, container and object. That has a massive impact on the replicator and updater processes running on such small hosts.
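
To make those numbers concrete, here is a minimal sketch using Swift's RingBuilder class with the same three parameters you would pass to swift-ring-builder create; it simply shows what the 20 2 1 above translates into:

```python
from swift.common.ring import RingBuilder

# "20 2 1": partition power 20, 2 replicas, min_part_hours of 1
builder = RingBuilder(20, 2, 1)

print(builder.parts)     # 1048576, i.e. over a million partitions per ring
print(builder.replicas)  # 2.0
```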

Over the past year we’ve learnt a lot about it. What have we decided to change?

We moved the proxy nodes and memcached to separate machines, which improved the performance of the storage servers and also gave us the option of using different ring files.

In small clusters, if you expand with new servers it’s possible to get 404s for some objects while the cluster is rebalancing: the proxy nodes think an object is stored on a particular partition, but in reality it isn’t there yet because the transfer (rebalancing) hasn’t completed. This doesn’t happen every time, but it’s nice to have the option of rolling out ring files to the proxies independently.

The next step was to use more storage servers with a different hardware setup. Now we’re using plain JBOD SATA HDDs for storing the objects and SSDs for the account and container data. You end up with a lot of HDDs in your systems, but the partitions are distributed much better, which means a replication or update run completes much faster.
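
A rough sketch of what this separation can look like at the ring level, again using the RingBuilder API (builder file names, IP addresses, ports and device names are placeholders; the same can be achieved with the swift-ring-builder command line tool):

```python
from swift.common.ring import RingBuilder

# Load the existing builder files (placeholder names).
account = RingBuilder.load('account.builder')
container = RingBuilder.load('container.builder')
objects = RingBuilder.load('object.builder')

# SSDs carry the small but seek-heavy account and container databases ...
account.add_dev({'region': 1, 'zone': 1, 'ip': '10.0.1.10',
                 'port': 6002, 'device': 'ssd0', 'weight': 100})
container.add_dev({'region': 1, 'zone': 1, 'ip': '10.0.1.10',
                   'port': 6001, 'device': 'ssd0', 'weight': 100})

# ... while the JBOD SATA disks carry the objects themselves.
for disk in ('sdb', 'sdc', 'sdd', 'sde'):
    objects.add_dev({'region': 1, 'zone': 1, 'ip': '10.0.1.10',
                     'port': 6000, 'device': disk, 'weight': 4000})

for name, builder in (('account', account), ('container', container),
                      ('object', objects)):
    builder.rebalance()
    builder.save(f'{name}.builder')
    builder.get_ring().save(f'{name}.ring.gz')
```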

We increased the replica count to 3 to make the cluster more resilient against failing nodes or HDDs. In addition, we would like to decrease the number of partitions (currently over a million), but this step can’t be done without high risk on an existing cluster that already holds data.
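
Raising the replica count is itself just a ring change. A minimal sketch with the same API (the builder file name is a placeholder; swift-ring-builder offers an equivalent set_replicas command):

```python
from swift.common.ring import RingBuilder

builder = RingBuilder.load('object.builder')  # placeholder file name
builder.set_replicas(3)   # previously 2: every partition gets a third copy
builder.rebalance()       # reassigns partitions to place the new replicas
builder.save('object.builder')
builder.get_ring().save('object.ring.gz')
```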

The last hardware improvement was the network bandwidth, which we upgraded to 10G (backnet + WAN). The final requirement concerned the Swift version: we wanted to run Swift version 2.2.2 (Kilo) directly on the new servers, but without copying everything from the old cluster to the new one.

This was a really short overview of our intentions and problems. The next part will walk through the steps from the old production setup to the new one.

Read more in One Swift to Store Them All – Part 2.

