Installing Project-Fifo SmartOS Cloud Orchestrator (Part 1/2 - LeoFS)

Project-Fifo is an Open Source SmartOS Cloud Orchestrator developed by Heinz N. Gies, a.k.a. Licenser.

The latest version is 0.7, released on October 12, 2015. This how-to will guide you through installing that version.

As you may already know, Project-Fifo is at the core of our vRocket.io SmartVPS infrastructure, and we've been using it for quite a while.

Even though it's "just" at version 0.7 (internally we think of it as 3.0, given how much 0.7 added compared to 0.6), it is a feature-packed, mature, production-ready product, and commercial support is available as well.

Installing Project-FiFo Orchestrator

The following guide will take you through the tasks required to install a Project-Fifo orchestrator. While this guide is intended for development environments, along the way we will note the extra steps needed to make your install production-worthy.

Pre-requisites

Before starting, make sure you have the following ready:

  • A physical host (or sandbox) with a SmartOS hypervisor installed
  • The latest version of SmartOS officially tested with FiFo is 20150108T111855Z
  • For production environments you should, ideally, have 5 hosts ready to provide the utmost redundancy for the FiFo deployment, but for a quick start a minimum of 2 hosts will do (you can always expand later by spinning up more instances of FiFo)

Resource requirements:

  • LeoFS will need 3GB of RAM per Zone
  • Each FiFo instance also needs 3GB of RAM per Zone

Step #1 - Installing LeoFS

LeoFS is Project-FiFo's storage platform of choice. It is an unstructured, highly available, eventually consistent, distributed, S3-compatible object store. You may think of it as your own, local version of Amazon's S3.

Your cloud's datasets as well as virtual machine backups will be stored here. It is highly recommended to set up at the very least 2 LeoFS zones, but, as with Project-FiFo itself, 5 is the sweet spot for the utmost redundancy and availability.

1.1 LeoFS Internals

A LeoFS cluster consists of 3 services written in Erlang:

  • LeoFS Gateway - handles HTTP requests and responses from clients (FiFo nodes)
  • LeoFS Storage - handles GET, PUT, and DELETE operations on objects as well as metadata (this is where your data physically resides)
  • LeoFS Manager - monitors the LeoFS Gateway and the LeoFS Storage nodes

Production Deployment Note

In production environments it is recommended that you have multiple storage nodes distributed across multiple physical servers. Do note that your actual consistency level has nothing to do with the number of physical storage nodes; it is determined entirely by the options configured within LeoFS. Most importantly, once your consistency levels have been defined and your storage cluster has started, they CANNOT be changed!

1.2 Installing LeoFS

In this part, we will set up 2 LeoFS zones, as this is the minimum recommended by Project-FiFo. Remember - 5 is best practice for production, with (in our humble opinion) at least 3 serving as storage nodes. That's at least how we do it at vRocket.

Once you complete this part we should have:

  • LeoFS-01 - 10.88.88.105
    • Manager0, Gateway0, Storage0 - on host-01
  • LeoFS-02 - 10.88.88.106
    • Manager1, Gateway1, Storage1 - on host-01 (if dev) or host-02 (if a real production environment)

1.2.1 Creating LeoFS Zones

From the GZ (Global Zone) on your SmartOS hypervisor, we will first import the base dataset (SmartOS minimal-64-lts 14.4.2), which our LeoFS as well as Project-Fifo zones will run on.

# SmartOS Live Image v0.147+ build: 20151001T070028Z
# let's update our images/datasets repository
[root@node01 ~] imgadm update


# now let's import the SmartOS 14.4.2 LTS image
[root@node01 ~] imgadm import 1bd84670-055a-11e5-aaa2-0346bb21d5a1
Importing 1bd84670-055a-11e5-aaa2-0346bb21d5a1 (minimal-64-lts@14.4.2) from "https://images.joyent.com"
Gather image 1bd84670-055a-11e5-aaa2-0346bb21d5a1 ancestry
Must download and install 1 image (24.2 MiB)
Download 1 image     [==========================>] 100%  24.30MB   8.78MB/s     2s
Downloaded image 1bd84670-055a-11e5-aaa2-0346bb21d5a1 (24.2 MiB)
...aaa2-0346bb21d5a1 [==========================>] 100%  24.30MB   7.95MB/s     3s
Imported image 1bd84670-055a-11e5-aaa2-0346bb21d5a1 (minimal-64-lts@14.4.2)


# and let's confirm it's there
[root@node01 ~] imgadm list | grep 1bd84670-055a-11e5-aaa2-0346bb21d5a1
1bd84670-055a-11e5-aaa2-0346bb21d5a1  minimal-64-lts  14.4.2   smartos  2015-05-28T16:53:47Z

Now it's time to create 2 SmartOS Zones using this image. For IP addresses we picked:

  • 10.88.88.105 - LeoFS-01
  • 10.88.88.106 - LeoFS-02

You should pick whatever internal IP addressing scheme you're using on your management VLAN.

Using your favorite editor (nano, vi) create a file in your Global Zone called leofs-01.json, and put the following JSON structure into it (replacing the IP address part with your own).

If you're wondering where on the file system to put it, /opt/ is usually the best choice, as files there survive a reboot.
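For example (the directory and file names below are just a suggestion; any persistent path will do):

# create a working directory under /opt so the payload files survive a reboot
[root@node01 ~] mkdir -p /opt/fifo-setup
[root@node01 ~] cd /opt/fifo-setup
[root@node01 ~] nano leofs-01.json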

{
 "autoboot": true,
 "brand": "joyent",
 "image_uuid": "1bd84670-055a-11e5-aaa2-0346bb21d5a1",
 "max_physical_memory": 3072,
 "cpu_cap": 100,
 "alias": "LeoFS-01",
 "quota": "80",
 "resolvers": [
  "8.8.8.8",
  "8.8.4.4"
 ],
 "nics": [
  {
   "interface": "net0",
   "nic_tag": "admin",
   "ip": "10.88.88.105",
   "gateway": "10.88.88.1",
   "netmask": "255.255.255.0"
  }
 ]
}

While we're in config creation mode, let's also create another file called leofs-02.json and put the following JSON structure into it (again, replacing the IPs with your own). Ideally, you'll do this on your second SmartOS physical host, unless you're deploying FiFo in a sandbox to play with.

{
 "autoboot": true,
 "brand": "joyent",
 "image_uuid": "1bd84670-055a-11e5-aaa2-0346bb21d5a1",
 "max_physical_memory": 3072,
 "cpu_cap": 100,
 "alias": "LeoFS-02",
 "quota": "80",
 "resolvers": [
  "8.8.8.8",
  "8.8.4.4"
 ],
 "nics": [
  {
   "interface": "net0",
   "nic_tag": "admin",
   "ip": "10.88.88.106",
   "gateway": "10.88.88.1",
   "netmask": "255.255.255.0"
  }
 ]
}
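Before actually creating anything, you can optionally have vmadm sanity-check the payloads (adjust the file paths to wherever you saved them, and run each check on the host that will own that zone):

# optional: validate the JSON payloads before creating the zones
[root@node01 ~] vmadm validate create -f leofs-01.json
[root@node02 ~] vmadm validate create -f leofs-02.json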

Now, let's use vmadm to create the two LeoFS Zones.

On your first SmartOS node, run:

[root@node01 ~] vmadm create -f leofs-01.json

# on your second SmartOS node (if production)
# or on the same node if just playing around, run
[root@node02 ~] vmadm create -f leofs-02.json

Great - at this point, running vmadm list should show the 2 LeoFS zones we created. Note their UUIDs, as you'll need them to log into the zones from the SmartOS GZ console.
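If both zones live on the same host (dev scenario), the output of vmadm list will look roughly like this; in a production layout you'll see one zone per host. The UUIDs below are the ones we keep using throughout the rest of this guide - yours will differ:

[root@node01 ~] vmadm list
UUID                                  TYPE  RAM      STATE             ALIAS
5caf83d3-b5fe-43f3-8264-259476c93b59  OS    3072     running           LeoFS-01
b3f88ac2-033c-432c-ae74-81fde067db0f  OS    3072     running           LeoFS-02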

1.2.2 LeoFS-01 Zone Configuration

Let's log into the first LeoFS zone to install the following services: Manager, Gateway, and Storage, as outlined in section 1.2 above.

Starting with the 14.4.0 datasets, Joyent introduced signed packages, and as of version 0.7 FiFo also signs its packages. To properly install FiFo packages, the FiFo public key is required; you'll see it being imported below.

# login to the LeoFS-01 zone using its UUID. Ours, as seen above, starts with 5caf83d3
# zlogin <leofs-01-uuid>

[root@node01 ~] zlogin 5caf83d3-b5fe-43f3-8264-259476c93b59
[Connected to zone '5caf83d3-b5fe-43f3-8264-259476c93b59' pts/2]
   __        .                   .
 _|  |_      | .-. .  . .-. :--. |-
|_    _|     ;|   ||  |(.-' |  | |
  |__|   `--'  `-' `;-| `-' '  ' `-'
                   /  ; Instance (minimal-64-lts 14.4.2)
                   `-'  https://docs.joyent.com/images/smartos/minimal
[root@5caf83d3 ~] 

# to make it easier to know where we are when we're in the zone
# let's set up a hostname that is a bit more human readable
[root@5caf83d3 ~] hostname leofs-01

# exit and then come back in, and your prompt will now show [root@leofs-01 ~]
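
# (optional) the hostname set above may not survive a zone reboot; assuming the
# "hostname" property is supported by your vmadm version (it should be on this
# platform), you can make the name permanent from the GZ instead:
[root@node01 ~] vmadm update 5caf83d3-b5fe-43f3-8264-259476c93b59 hostname=leofs-01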

# now that we're in, let's download a couple of things for pkgin
[root@leofs-01 ~] curl -O https://project-fifo.net/fifo.gpg
[root@leofs-01 ~] gpg --primary-keyring /opt/local/etc/gnupg/pkgsrc.gpg --import < fifo.gpg
[root@leofs-01 ~] gpg --keyring /opt/local/etc/gnupg/pkgsrc.gpg --fingerprint

# now we'll set a VERSION variable to "rel" which stands for "release" version of fifo
[root@leofs-01 ~] VERSION=rel

# then, let's back up our current repositories config file for good measure
# and insert fifo's package repo path into it
[root@leofs-01 ~] cp /opt/local/etc/pkgin/repositories.conf /opt/local/etc/pkgin/repositories.conf.original
echo "http://release.project-fifo.net/pkg/${VERSION}" >> /opt/local/etc/pkgin/repositories.conf

# now it's time to update the package manager info
[root@leofs-01 ~] pkgin -fy up

# and install some core utils as well as leo manager, gateway and storage
[root@leofs-01 ~] pkgin install coreutils sudo gawk gsed nano
[root@leofs-01 ~] pkgin install leo_manager leo_gateway leo_storage
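
# (optional) confirm the LeoFS packages actually got installed
[root@leofs-01 ~] pkgin list | grep leo_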

Before we begin configuring, let's generate a random cookie that will be used as the distributed cookie in all LeoFS config files. Think of it as a password for all the LeoFS pieces that need to talk to one another (Gateway, Manager, Storage node, etc.).

[root@leofs-01 ~] openssl rand -base64 32 | fold -w16 | head -n1
uHFzR0MRB/wyPD/X

Note this down:

The generated cookie will be used below when configuring the LeoFS services. In our case it is uHFzR0MRB/wyPD/X; in your case it will be a different random string.
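If it helps, you can also stash the cookie in a shell variable inside the zone so it's easy to echo back while editing the three config files (the variable name is just an example):

[root@leofs-01 ~] LEOFS_COOKIE="uHFzR0MRB/wyPD/X"   # replace with your own generated value
[root@leofs-01 ~] echo $LEOFS_COOKIE
uHFzR0MRB/wyPD/X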

1.2.2.1 Configuring LeoFS Manager

Edit the /opt/local/leo_manager/etc/leo_manager.conf file first, and make it look like so, replacing the distributed_cookie value with the one you actually generated in step 1.2.2 above.

nodename = manager_0@10.88.88.105           # this manager's name and IP address
distributed_cookie = uHFzR0MRB/wyPD/X       # common cookie we generated above
manager.partner = manager_1@10.88.88.106    # partner is the next manager (LeoFS-02), not yet set up

# VERY IMPORTANT - THESE CAN'T BE CHANGED LATER!!!
# SO MAKE SURE TO SET YOUR CONSISTENCY SETTINGS RIGHT THE FIRST TIME

# A number of replicas
consistency.num_of_replicas = 1            

# A number of replicas needed for a successful WRITE operation
consistency.write = 1

# A number of replicas needed for a successful READ operation
consistency.read = 1

# A number of replicas needed for a successful DELETE operation
consistency.delete = 1 

↳ Make sure you understand what your consistency levels should be by studying the Consistency Levels page on the LeoFS website.

In case you're wondering: in our setup at vRocket, we use 3, 3, 2, and 2 respectively.
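For illustration only, those production values would map onto the config keys above like this (don't copy them blindly - they assume you actually run at least 3 storage nodes):

# example of a 3-replica production configuration (N=3, W=3, R=2, D=2)
consistency.num_of_replicas = 3
consistency.write = 3
consistency.read = 2
consistency.delete = 2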

1.2.2.2 Configuring LeoFS Gateway

Edit the /opt/local/leo_gateway/etc/leo_gateway.conf file, and make it look like so, replacing the distributed_cookie value with the one you actually generated in step 1.2.2 above.

distributed_cookie = uHFzR0MRB/wyPD/X
managers = [manager_0@10.88.88.105, manager_1@10.88.88.106]   # names and IPs of our 2 LeoFS Manager zones
http.port = 80
http.ssl_port = 443

1.2.2.3 Configuring LeoFS Storage Service

Edit the /opt/local/leo_storage/etc/leo_storage.conf and make it look like so, again, replacing the distributed_cookie value with the one you actually generated in step 1.2.2 above.

distributed_cookie = uHFzR0MRB/wyPD/X
managers = [manager_0@10.88.88.105, manager_1@10.88.88.106]

1.2.3 LeoFS-02 Zone Configuration

This is quite similar to the LeoFS-01 zone setup above, so instead of repeating everything in detail, I'll just show the code snippets this time around.

# login to the LeoFS-02 zone using its UUID. Ours, as seen above, starts with b3f88ac2
# zlogin <leofs-02-uuid>

[root@node02 ~] zlogin b3f88ac2-033c-432c-ae74-81fde067db0f
[Connected to zone 'b3f88ac2-033c-432c-ae74-81fde067db0f' pts/2]
   __        .                   .
 _|  |_      | .-. .  . .-. :--. |-
|_    _|     ;|   ||  |(.-' |  | |
  |__|   `--'  `-' `;-| `-' '  ' `-'
                   /  ; Instance (minimal-64-lts 14.4.2)
                   `-'  https://docs.joyent.com/images/smartos/minimal
[root@b3f88ac2 ~]
# to make it easier to know where we are when we're in the zone
# let's set up a hostname that is a bit more human readable
[root@b3f88ac2 ~] hostname leofs-02

# exit and then come back in, and your prompt will now show [root@leofs-02 ~]

# now that we're in, let's download a couple of things for pkgin
[root@leofs-02 ~] curl -O https://project-fifo.net/fifo.gpg
[root@leofs-02 ~] gpg --primary-keyring /opt/local/etc/gnupg/pkgsrc.gpg --import < fifo.gpg
[root@leofs-02 ~] gpg --keyring /opt/local/etc/gnupg/pkgsrc.gpg --fingerprint

# now we'll set a VERSION variable to "rel" which stands for "release" version of fifo
[root@leofs-02 ~] VERSION=rel

# then, let's back up our current repositories config file for good measure
# and insert fifo's package repo path into it
[root@leofs-02 ~] cp /opt/local/etc/pkgin/repositories.conf /opt/local/etc/pkgin/repositories.conf.original
echo "http://release.project-fifo.net/pkg/${VERSION}" >> /opt/local/etc/pkgin/repositories.conf

# now it's time to update the package manager info
[root@leofs-02 ~] pkgin -fy up

# and install some core utils as well as leo manager, gateway and storage
[root@leofs-02 ~] pkgin install coreutils sudo gawk gsed nano
[root@leofs-02 ~] pkgin install leo_manager leo_gateway leo_storage

Remember:

Before we begin configuration, recall the distributed_cookie we generated above. In our setup it was uHFzR0MRB/wyPD/X so we'll use it below.

1.2.3.1 Configuring LeoFS Manager

Edit the /opt/local/leo_manager/etc/leo_manager.conf file first, and make it look like so, replacing the distributed_cookie value with the one you actually generated in step 1.2.2 above.

nodename = manager_1@10.88.88.106           # this manager's name and IP address
distributed_cookie = uHFzR0MRB/wyPD/X       # common cookie we generated on the first zone
manager.partner = manager_0@10.88.88.105    # partner is the other manager (LeoFS-01), previously set up

# VERY IMPORTANT - THESE CAN'T BE CHANGED LATER!!!
# NUMBERS MUST MATCH WHATEVER YOU SET ON LEOFS-01 !!!

# A number of replicas
consistency.num_of_replicas = 1            


# A number of replicas needed for a successful WRITE operation
consistency.write = 1

# A number of replicas needed for a successful READ operation
consistency.read = 1

# A number of replicas needed for a successful DELETE operation
consistency.delete = 1 

↳ Make sure you understand what your consistency levels should be by studying the Consistency Levels page on the LeoFS website.

1.2.3.2 Configuring LeoFS Gateway

Edit the /opt/local/leo_gateway/etc/leo_gateway.conf file, and make it look like so, replacing the distributed_cookie value with the one you actually generated in step 1.2.2 above.

distributed_cookie = uHFzR0MRB/wyPD/X
managers = [manager_0@10.88.88.105, manager_1@10.88.88.106]   # names and IPs of our 2 LeoFS Manager zones
http.port = 80
http.ssl_port = 443

1.2.3.3 Configuring LeoFS Storage Service

Edit the /opt/local/leo_storage/etc/leo_storage.conf and make it look like so, again, replacing the distributed_cookie value with the one you actually generated in step 1.2.2 above.

distributed_cookie = uHFzR0MRB/wyPD/X
managers = [manager_0@10.88.88.105, manager_1@10.88.88.106]

1.2.4 Starting LeoFS Services

On both of our LeoFS zones, first on 01 and then on 02, we'll start the LeoFS services in a specific order.

Please Note:

Startup order is VERY important, and we'll use the leofs-adm status command to confirm that the manager service is up on BOTH LeoFS zones before continuing.

1.2.4.1 Starting LeoFS Managers

On LeoFS-01:

# let's login to LeoFS-01 Zone first
[root@node01 ~] zlogin 5caf83d3-b5fe-43f3-8264-259476c93b59

# start LeoFS-01 Manager services
[root@leofs-01 ~] svcadm enable epmd
[root@leofs-01 ~] svcadm enable leofs/manager

# to check the LeoFS Service status run
[root@leofs-01 ~] leofs-adm status

On LeoFS-02:

# let's login to LeoFS-02 Zone next
[root@node02 ~] zlogin b3f88ac2-033c-432c-ae74-81fde067db0f

# start LeoFS-02 Manager services
[root@leofs-02 ~] svcadm enable epmd
[root@leofs-02 ~] svcadm enable leofs/manager

# to check the LeoFS Service status run
[root@leofs-02 ~] leofs-adm status

1.2.4.2 Starting LeoFS Gateways

On LeoFS-01:

# assuming you're logged into LeoFS-01 zone
# start LeoFS-01 Gateway services
[root@leofs-01 ~] svcadm enable leofs/gateway

# to check the LeoFS Service status run
[root@leofs-01 ~] leofs-adm status

On LeoFS-02:

# assuming you're logged into LeoFS-02 zone
# start LeoFS-02 Gateway services
[root@leofs-02 ~] svcadm enable leofs/gateway

# to check the LeoFS Service status run
[root@leofs-02 ~] leofs-adm status

1.2.4.3 Starting LeoFS Storage Service

On LeoFS-01:

# assuming you're logged into LeoFS-01 zone
# start LeoFS-01 Storage services
[root@leofs-01 ~] svcadm enable leofs/storage

# to check the LeoFS Service status run
[root@leofs-01 ~] leofs-adm status

On LeoFS-02:

# assuming you're logged into LeoFS-02 zone
# start LeoFS-02 Storage services
[root@leofs-02 ~] svcadm enable leofs/storage

# to check the LeoFS Service status run
[root@leofs-02 ~] leofs-adm status

Congratulations! You now have a LeoFS storage ring set up! Once things have initialized, running leofs-adm status should show all of the manager, gateway, and storage services started above.
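One caveat worth mentioning: depending on your LeoFS version, the storage nodes may initially be reported as "attached" rather than "running", in which case the cluster has to be started explicitly from a manager zone. Check leofs-adm status and the LeoFS documentation for your release before doing so; a typical invocation looks like this:

# run from a manager zone, once all storage nodes show up in the status output
[root@leofs-01 ~] leofs-adm start
[root@leofs-01 ~] leofs-adm status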

Comments

  • As I am unable to upload the second part of this tutorial here (CloudFlare thinks I am trying to SQL-inject or hack LET with all the code excerpts), please head to my docs website if you are interested in the second part: http://docs.vrocket.io/display/SS/How+to+Install+Project-FiFo+SmartOS+Cloud+Orchestrator

  • Thanks for sharing.

    Personally, I think Project-FiFo and LeoFS are cool, but the documentation is poor. Instead of the how, I would like to know:

    • what the goal and focus of these projects are;
    • why they are better than other competitors.
  • vRocket_io Member
    edited October 2015

    @bookstack said:
    Thanks for sharing.

    Personally, I think Project-FiFo and LeoFS are cool, but the documentation is poor. Instead of the how, I would like to know:

    • what the goal and focus of these projects are;
    • why they are better than other competitors.

    Here's what SmartOS brings to the table as, arguably, the most feature-rich hypervisor today:

    • It is an in-memory hypervisor (boots off a USB drive or over the network via PXE and lives in your machine's RAM)
    • Comes with the ZFS file system (the most robust and reliable file system today, where data integrity is guaranteed)
    • Utilizes Zones for virtualization (the most secure container-based virtualization technology available today)
    • OS virtual machines (Zones): a lightweight virtualization solution offering a complete and secure userland environment on a single global kernel, with true bare-metal performance and all the features illumos has
    • It also supports KVM virtual machines: a full virtualization solution for running a variety of guest OSes including Linux, Windows, *BSD, Plan 9 and more, using a hardware emulation layer within a SmartOS Zone
    • Crossbow (a full-stack network virtualization solution)

    I mean, it is a completely different stack, and being an OS-level Type-1 hypervisor, it is so much leaner and higher-performing than anything else around that it's ridiculous.

    You can read more on my blog: http://docs.vrocket.io/pages/viewrecentblogposts.action?key=SS

    As well as on vRocket's SmartVPS page: https://vrocket.io/virtual_private_servers.php

    As for why Project-Fifo: you have 2 options when it comes to orchestrating SmartOS (if that's the hypervisor you actually go with) - 1) Joyent's SmartDataCenter (also open source) and 2) Project-Fifo. There really isn't anything else out there - unlike Linux distros, which have 1001 orchestrators available.

    Hope that explains some.

  • Hello guys,
    I'm very new to SmartOS and wanted to set this up in my home lab. I got through this part okay; however, http://docs.vrocket.io/display/SS/How+to+Install+Project-FiFo+SmartOS+Cloud+Orchestrator is a dead link. Where can I get the second part of this article so I can complete the configuration?

    Thanks,
    Michael

  • mcooper59 Member
    edited August 2017

    Hello Again Guys,

    So here is my scenario: I have 4 servers, and only 2 of them can be used for this setup. They are Dell SC23s with Intel processors, 32 GB of RAM each, and two 500 GB SATA 7200 RPM drives. Each has 2 NICs, one on a 192.168.x.x subnet and one on a 192.168.xx.xxx subnet. I have so many questions, but I will start with these first.

    So I installed SmartOS on the first node (cfSOS1); I am going to assume this should be considered the head node, am I correct? If so, do I need a second node, and how do I make it a compute node? Is the Global Zone assumed on the second node (e.g. cfSOS2), or do I have to declare in the JSON file that cfSOS1 is the head node?

    Secondly: can the LeoFS zones be built on the head node? If so, do I use the same storage that the head node is on? What I mean by that is that the head node took c2t0d0 (example only) and used all of its storage when I installed it. Will the LeoFS zones be able to use this, or do I need to add storage for them individually?

    I know these are probably stupid questions to you guys, but I am not sure how to proceed here.

    Thank you for your time and patience,
    Michael

  • WSS Member

    @mcooper59 said:
    Hello guys,
    I'm very new to SmartOS and wanted to set this up in my home lab. I got through this part okay; however, http://docs.vrocket.io/display/SS/How+to+Install+Project-FiFo+SmartOS+Cloud+Orchestrator is a dead link. Where can I get the second part of this article so I can complete the configuration?

    Thanks,
    Michael

    That article is from 2015. SmartOS internals have likely changed since anything mentioned here. Not by a hell of a lot, but enough to make it stop working.

    Wish I could help as I was a Solaris junkie, but the x86_64 platform just leaves a bad taste in my mouth.
