OpenShift v3 quickstart#2108

Merged
dsander merged 4 commits into huginn:master from baip:openshift
Sep 16, 2017

Conversation

@baip
Contributor

@baip commented Sep 3, 2017

These commits still contain the deployment scripts for the previous version. With the quickstart for v3, the user experience should be much better. BTW, when I tried to use the docker image on OpenShift, I got a permission error, presumably because OpenShift doesn't run containers as the root user.

See #246, #1388

@baip force-pushed the openshift branch 6 times, most recently from 60c99ba to a76e172 on September 4, 2017 03:04
@baip
Contributor Author

baip commented Sep 4, 2017

Could someone please help me understand why the last three tests might have failed? I don't know enough about what the tests are designed to exercise, but the files I added or modified were all in their separate directories; I can delete them entirely and run Huginn without a problem.

@dsander
Collaborator

dsander commented Sep 5, 2017

Hi @baip,

thanks for all that work, the spec failures are not related to your changes. Sadly, we have had some flaky specs recently.

Is there any way to have a simpler deployment to OpenShift? The amount of code we would need to maintain scares me a bit. Especially compared to Heroku, it looks like a lot of boilerplate that every project needs to duplicate. If our docker container did not run as root, could the docker image be used to simplify the deployment process?

@baip
Contributor Author

baip commented Sep 5, 2017

@dsander It's actually not that bad: only the files in openshift/templates, .s2i/bin, and .openshift/action_hooks/setup_env are new; the other files in .openshift are left over from OpenShift v2. I just wish to keep the scripts recorded somewhere because they still provide a good example of an automated Linux deployment. I can delete them right before the last commit -- would that be preferable?

If we ask the user to run something like a new bin/setup_openshift script, the deployment can be much simpler. The complexity mostly arises from the template files AND from allowing the possibility of changing DATABASE_ADAPTER after deployment (so I have to provide modified build scripts that use the same hack the Dockerfile uses). But providing template files makes it possible for users to deploy with a one-liner or even entirely from the web console. I can slim down the scripts in .s2i/bin, though.

@baip force-pushed the openshift branch 2 times, most recently from 1d58f63 to 6c5e843 on September 6, 2017 06:14
@dsander
Collaborator

dsander commented Sep 6, 2017

Thanks, it indeed looks a lot simpler now.

The complexity mostly arises from the template files AND from allowing the possibility of changing DATABASE_ADAPTER after deployment (so I have to provide modified build scripts that use the same hack the Dockerfile uses).

I don't know much about how OpenShift works. Is it kind of a mixture of deploying docker containers and how Heroku works? In the Dockerfile we only have those hacks to precompile the assets when we build the image. Since Rails requires a database connection to do that, we need to add the sqlite gem. Heroku does not require the hack because the asset precompilation is done on every deploy (which is kind of the standard for Rails applications). Switching the database adapter should not require any hacks as long as bundle install runs on every deploy.

@baip
Contributor Author

baip commented Sep 6, 2017

I think OpenShift now runs Docker containers, while Heroku custom built their own container solution based on LXC. Besides running Docker images directly, OpenShift has something called source-to-image (s2i), which uses a language runtime image with scripts that build user code into a new image and start the application when deployed, so this mode of operation is very similar to Heroku. We can also describe the entire setup in a template file, but when using these quickstart templates, the database is deployed at the same time as the application itself and may not be ready for Huginn to use, which is why I had to modify the builder scripts to use sqlite3 temporarily. If a user/setup script manually deploys the database image first, and then the application with all the DATABASE_* environment variables set up properly, there would be no need for any of the custom scripts or quickstart templates.
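The manual two-step flow described above could be sketched with the `oc` CLI roughly as follows. This is a sketch only: the image versions, object names, and credentials are illustrative placeholders, not part of this PR.

```shell
# Hypothetical two-step deploy: database first, then Huginn via s2i.
# All names and credentials below are placeholders.

# 1. Deploy MySQL and wait until its rollout finishes.
oc new-app mysql:5.7 --name=huginn-db \
  -e MYSQL_USER=huginn -e MYSQL_PASSWORD=secret -e MYSQL_DATABASE=huginn
oc rollout status dc/huginn-db

# 2. Build and deploy Huginn with the database connection preconfigured,
#    so no sqlite3 fallback is needed during the build.
oc new-app ruby~https://github.com/huginn/huginn --name=huginn \
  -e DATABASE_ADAPTER=mysql2 -e DATABASE_HOST=huginn-db \
  -e DATABASE_USERNAME=huginn -e DATABASE_PASSWORD=secret \
  -e DATABASE_NAME=huginn
```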

@dsander
Collaborator

dsander commented Sep 7, 2017

the database is deployed at the same time as the application itself and may not be ready for Huginn to use, which is why I had to modify the builder scripts to use sqlite3 temporarily.

I see, so we could do the asset precompilation in the run script, which would allow us to get rid of the assemble script?

When #2112 is merged, can we use one of the docker images for OpenShift? To me it looks like most of the logic that is done in both scripts is already handled by the docker containers.

@baip
Contributor Author

baip commented Sep 8, 2017

Yes, but the pod restart will then take significantly longer; the run script runs `database:migrate` but not `bundle install`, so it's quite fast right now. Docker images should work, but then one loses the ability to play with the code and still deploy easily.

@dsander
Collaborator

dsander commented Sep 8, 2017

I played around with OpenShift a bit and noticed that their stock build image for Ruby worked as long as I set ON_HEROKU=true in the build environment variables. Not sure how or why (maybe something in Rails changed), but the asset precompilation step does not seem to require a database connection anymore. Maybe that simplifies things a bit?

My build environment config:

```json
{
    "name": "ON_HEROKU",
    "value": "true"
},
{
    "name": "LC_ALL",
    "value": "en_US.UTF-8"
},
{
    "name": "LANG",
    "value": "en_US.UTF-8"
},
{
    "name": "APP_SECRET_TOKEN",
    "value": "xxx"
}
```

@baip
Contributor Author

baip commented Sep 9, 2017

Wow, you're right. I had actually tried adding ON_HEROKU=TRUE, but only after I started tinkering with customizing the assemble script to set up .env. I realized that ON_HEROKU sets up several defaults when the relevant environment variables are not set themselves, but I didn't know that it also skips the dotenv file altogether. I wish I had consulted you first...

The only other thing I needed for the build was `DATABASE_ADAPTER=mysql2`; otherwise the deploy would have problems.
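Taken together, the build-time environment discussed in this thread amounts to roughly the following sketch (the APP_SECRET_TOKEN value is a placeholder, as in the config above):

```shell
# Sketch of the build-time environment discussed in this thread.
# APP_SECRET_TOKEN's value is a placeholder; use a real generated secret.
export ON_HEROKU=true             # read config from env vars, skip the .env file
export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
export APP_SECRET_TOKEN=xxx
export DATABASE_ADAPTER=mysql2    # so the deploy talks to MySQL, not sqlite3
```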

Collaborator

@dsander left a comment


Nice! The configuration looks very clean now.

.s2i/bin/run Outdated

export RACK_ENV=${RACK_ENV:-"production"}

exec bundle exec unicorn -c ./deployment/heroku/unicorn.rb --listen 0.0.0.0:8080
Collaborator


Can we move the executed command to an optional environment variable? The Heroku unicorn configuration runs the background jobs in a thread within the unicorn server, which is fine for getting started but does not allow scaling. As I understand how OpenShift works, one could then set WORKER_CMD to unicorn --listen 0.0.0.0:8080 and bin/threaded.rb in two different pods.
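A minimal sketch of what that could look like in the run script; `WORKER_CMD` is the variable name suggested here, and the default command is the one from the diff above:

```shell
#!/bin/sh
# Hypothetical .s2i/bin/run: use WORKER_CMD if set, else the single-pod default.
export RACK_ENV="${RACK_ENV:-production}"

DEFAULT_CMD="unicorn -c ./deployment/heroku/unicorn.rb --listen 0.0.0.0:8080"
CMD="${WORKER_CMD:-$DEFAULT_CMD}"

echo "Starting: bundle exec $CMD"
# The real script would hand off to the server here:
# exec bundle exec $CMD
```

A worker pod could then set WORKER_CMD to run bin/threaded.rb while the web pod keeps the unicorn default.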

Contributor Author


@dsander Good idea -- it's done.

fi

# Configure the unicorn server
mv config/unicorn.rb.example config/unicorn.rb
Collaborator


Sorry, I overlooked those lines; unicorn.rb isn't used, so I think we can remove these 4 files.

Contributor Author


Since users can now specify whether a pod is web or worker via WORKER_CMD, they can run unicorn -c config/unicorn.rb --listen 0.0.0.0:8080 if this file is ready to serve.

Collaborator


I see that makes sense.

Collaborator

@dsander left a comment


Great work @baip! I was very skeptical at first but the latest version is very clean and easy to understand (maybe apart from the template JSON, but I am sure one can read about it in the OpenShift documentation).

I am merging this soon if nobody else has any complaints or remarks.


@dsander
Collaborator

dsander commented Sep 16, 2017

Thanks a lot @baip!

@dsander merged commit 5ee7c35 into huginn:master on Sep 16, 2017
@baip deleted the openshift branch on September 17, 2017 04:21
@baip
Contributor Author

baip commented Sep 17, 2017

Glad to see it merged. Thanks!

@cantino
Member

cantino commented Sep 18, 2017

Nice @baip!
