Configuring Openshift Origin with S3-based persistent shared storage
This post describes the steps I took to provide shared storage for an OpenShift Origin M4 installation. There were some difficulties that had to be solved by non-standard methods.
Requirement
When hosting applications on the Openshift Origin platform, we are confronted with a bitter truth: writing applications for cloud platforms requires us to avoid writing to local filesystems, and there is no support for storage shared between gears. But we still need to support multiple PHP applications that store their attachments in the local filesystem, with minimal code changes. So we need a way to quickly implement shared storage between gears of the same application, and maybe we can loosen the application isolation requirement just for the shared storage.
Basic Idea
The idea is to mount an S3 API-based storage on all nodes. Each gear can then refer to the application's folder inside the shared storage to store and retrieve file attachments. My implementation uses an EMC VIPR shared storage with an S3 API, which I assume is harder to get working than a real Amazon S3 bucket would be. I used the S3FS implementation from https://github.com/s3fs-fuse/s3fs-fuse to mount the S3 storage as folders.
Pitfalls
Openshift gears are not allowed to write to arbitrary directories. The gears can't even peek into other gears' directories, which is enforced using SELinux Multi Category Security (MCS). Custom SELinux policies are in place, complex enough to baffle a run-of-the-mill admin. So mounting the S3 storage on the nodes is only half of the battle.
S3FS Fuse needs a newer version of Fuse than the one packaged with RHEL 6, and Fuse needs a small patch to allow mounting the S3 storage using a context other than fusefs_t.
Access-control results for a given directory are cached for a process's lifetime, so if a running httpd has been denied access, make sure it is restarted after we remount the S3 storage with a different context.
Step-by-step
First, make sure that your system is clear of any old fuse package, then download the latest fuse version from SourceForge and extract it.
# wget http://downloads.sourceforge.net/project/fuse/fuse-2.X/2.9.3/fuse-2.9.3.tar.gz
# tar xzf fuse-2.9.3.tar.gz
# cd fuse-2.9.3
# ./configure --prefix=/usr
# export PKG_CONFIG_PATH=/usr/lib/pkgconfig:/usr/lib64/pkgconfig/
Before compiling, we need to add the missing context options to lib/mount.c. In the fuse_mount_opts table, insert the four context-related lines between the existing default_permissions and max_read entries:
FUSE_OPT_KEY("default_permissions", KEY_KERN_OPT),
FUSE_OPT_KEY("context=", KEY_KERN_OPT),
FUSE_OPT_KEY("fscontext=", KEY_KERN_OPT),
FUSE_OPT_KEY("defcontext=", KEY_KERN_OPT),
FUSE_OPT_KEY("rootcontext=", KEY_KERN_OPT),
FUSE_OPT_KEY("max_read=", KEY_KERN_OPT),
FUSE_OPT_KEY("max_read=", FUSE_OPT_KEY_KEEP),
FUSE_OPT_KEY("user=", KEY_MTAB_OPT),
The context=, fscontext=, defcontext=, and rootcontext= lines are the inserted ones. Do the changes, save, then compile the whole thing.
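If you would rather script the edit than open an editor, a GNU sed one-liner can do the insertion. The sketch below runs against a scratch file so it can be tried safely; in practice, point MOUNT_C at lib/mount.c inside the fuse-2.9.3 source tree (the lost indentation of the inserted lines is purely cosmetic):

```shell
# Scratch copy of the two neighbouring rows of the fuse_mount_opts table;
# in the real tree, use: MOUNT_C=lib/mount.c
MOUNT_C=$(mktemp)
printf 'FUSE_OPT_KEY("default_permissions", KEY_KERN_OPT),\nFUSE_OPT_KEY("max_read=", KEY_KERN_OPT),\n' > "$MOUNT_C"

# Insert the four SELinux context options right after default_permissions.
sed -i '/"default_permissions"/a\
FUSE_OPT_KEY("context=", KEY_KERN_OPT),\
FUSE_OPT_KEY("fscontext=", KEY_KERN_OPT),\
FUSE_OPT_KEY("defcontext=", KEY_KERN_OPT),\
FUSE_OPT_KEY("rootcontext=", KEY_KERN_OPT),' "$MOUNT_C"

grep -c 'context=' "$MOUNT_C"   # prints 4: all four options are in place
```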
# make
# make install
# ldconfig
# modprobe fuse
# pkg-config --modversion fuse
# wget https://github.com/s3fs-fuse/s3fs-fuse/archive/master.zip
# unzip master.zip
# cd s3fs-fuse-master
# ./autogen.sh
# ./configure --prefix=/usr/local
# make
# make install
Put your AWS credentials or other access key in the ~/.passwd-s3fs file. The syntax is:
accessKeyId:secretAccessKey
or
bucketName:accessKeyId:secretAccessKey
Ensure that ~/.passwd-s3fs is readable only by its owner:
# chmod 600 ~/.passwd-s3fs
Let's mount the S3 storage:
# s3fs always-on-non-core /var/[our-shared-folder] -o url=http://[server]:[port]/ -o use_path_request_style -o context=system_u:object_r:openshift_rw_file_t:s0 -o umask=000 -o allow_other
Change the our-shared-folder part to the mount point we want to use, and the server and port parts to the S3 service endpoint (always-on-non-core is the bucket name in my setup). If we are using the real Amazon S3, we omit the -o url option. We might also omit use_path_request_style to use the newer virtual-hosted API style; it is only needed for S3-compatible storage.
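To make the mount survive reboots, s3fs can also be started from /etc/fstab using its s3fs#bucket device syntax. This is a sketch under the same placeholders as above; the _netdev option delays the mount until the network is up:

```
# /etc/fstab entry (one line), same options as the manual mount above
s3fs#always-on-non-core /var/[our-shared-folder] fuse url=http://[server]:[port]/,use_path_request_style,context=system_u:object_r:openshift_rw_file_t:s0,umask=000,allow_other,_netdev 0 0
```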
Configure the applications
As root on each node, create a folder for the application:
# mkdir /var/[our-shared-folder]/[appid]
Create a .openshift/action_hooks/build file inside the application's git repository, with the u+x bit set.
Fill it with:
#! /bin/bash
ln -sf /var/[our-shared-folder]/[appid] $OPENSHIFT_REPO_DIR/[appfolder]
Change the appfolder part to the folder under the application's root directory where we want to store the attachments. Afterwards, we can create a file in the folder from PHP like:
$f = fopen("[appfolder]/file1.txt","wb");
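The mechanics of the build hook can be tried outside OpenShift. This sketch uses scratch directories standing in for the shared mount and the gear's $OPENSHIFT_REPO_DIR:

```shell
# Scratch stand-ins: SHARED for /var/[our-shared-folder]/[appid],
# REPO for the gear's $OPENSHIFT_REPO_DIR.
SHARED=$(mktemp -d)
REPO=$(mktemp -d)

# What the build hook does: expose the shared folder inside the repo.
ln -sf "$SHARED" "$REPO/attachments"

# A write through the symlink lands in the shared storage.
echo "hello" > "$REPO/attachments/file1.txt"
cat "$SHARED/file1.txt"   # prints "hello"
```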
References:
https://code.google.com/p/s3fs/issues/detail?id=170
http://tecadmin.net/mount-s3-bucket-centosrhel-ubuntu-using-s3fs/