Monthly Archives: November 2012

Gitlab upgrade from 2.x.x to 3.1.0

Just some quick notes on a Gitlab upgrade I did today. Some serious issues arose due to me not being careful. Somehow, some of the users managed to enter SSH keys while the upgrade was taking place. I still don’t understand how, but some of the keys ended up having invalid content in /home/git/.ssh/authorized_keys and in /home/git/.gitolite/keys/[name of the invalid key]. This manifested in a very strange manner:

  • some of the users had no issues using Gitlab
  • others couldn’t clone or push to projects they had been given access to in the Gitlab web interface

After losing myself for quite a bit, this Gitlab issue finally led me in the right direction. After comparing the keys in the database with those in the above two locations, I found the differences and deleted the surplus keys, as well as the invalid ones. It seems that Gitlab or Gitolite probably “loops” through the keys somewhere, and this loop breaks for users below the invalid keys while working OK for those above.
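For future reference, a rough way to spot such mismatches is to diff the key names Gitlab stores in its database against the key files gitolite keeps on disk. This is only a sketch: the paths and the `Key.pluck(:identifier)` call are assumptions based on a Gitlab 3.x setup, so verify them against your installation first.

```shell
#!/bin/bash
# Hypothetical mismatch check: key names in the Gitlab database vs. the
# key files gitolite keeps on disk. Paths and the rails runner call are
# assumptions for a Gitlab 3.x install -- adjust to your setup.
compare_keys() {  # usage: compare_keys <file-with-db-key-names> <gitolite-keydir>
  sort "$1" > /tmp/db_keys
  ls "$2" | sed 's/\.pub$//' | sort > /tmp/file_keys
  diff /tmp/db_keys /tmp/file_keys
}

# On the server, the database list would come from something like:
#   cd /home/git/gitlabhq && \
#     bundle exec rails runner -e production 'puts Key.pluck(:identifier)' > /tmp/db_list
# and then: compare_keys /tmp/db_list /home/git/.gitolite/keys
```

Anything that shows up on only one side of the diff is a candidate for cleanup.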

And a short recommendation for future reference: back up everything!!!

References:

https://github.com/gitlabhq/gitlabhq/wiki/From-2.9-to-3.0

https://github.com/gitlabhq/gitlabhq/wiki/Update-gitolite

https://github.com/gitlabhq/gitlabhq/blob/stable/doc/install/installation.md (latest setup instructions)

http://sitaramc.github.com/gitolite/install.html#migr (migrating Gitolite)


SOA ROAR

I’ve been toying with the SOA idea in the Ruby space lately. Part of it comes from reading Service-Oriented Design with Ruby and Rails by Paul Dix, and part from watching Uncle Bob’s by-now-famous talk Architecture the Lost Years. What really got me interested was the idea of packaging domain logic separately from the persistence and delivery mechanisms. Around that time I also ran into the roar and roar-rails gems from Nick Sutterer. All of this resulted in a small thought experiment and an accompanying Github repo.

I imagined myself to be a fruit shake maker 🙂 The entire ecosystem consists of the following components:

  • Book of orcharding – contains all the domain / business rules about fruit management
  • Orchard – serves as the persistence layer for the orcharding rules
  • Tutti frutti – exposes the orcharding rules as a JSON API
  • Smoothie mixer – a Rails project that consumes the API and delivers delicious fruit smoothies

At some later point in time, I was thinking about adding some other kind of client, e.g. a command line or Android/iOS app. These clients would preferably use the same business gem for the main logic and a JSON client gem to communicate with the JSON API. It is a work in progress, hope you like it 🙂 And kudos to Nick for the beautiful gems!
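To make the layering idea concrete, here is a plain-Ruby sketch (no roar) of the separation: the domain rule lives in its own object, persistence is anything that can list fruits, and the delivery side only talks to the domain. All the names here are hypothetical illustrations, not taken from the actual repo.

```ruby
# "Book of orcharding": a pure business rule, no persistence knowledge.
class RipenessPolicy
  DAYS_UNTIL_RIPE = 30

  def ripe?(fruit)
    fruit[:days_on_tree] >= DAYS_UNTIL_RIPE
  end
end

# "Orchard": the persistence layer; any object responding to #all works,
# so a database-backed implementation could be swapped in later.
class InMemoryOrchard
  def initialize(fruits)
    @fruits = fruits
  end

  def all
    @fruits
  end
end

# The delivery mechanism asks the domain, not the storage, which fruits
# are ready to go into a smoothie.
def ripe_fruits(orchard, policy = RipenessPolicy.new)
  orchard.all.select { |fruit| policy.ripe?(fruit) }
end
```

Because neither the policy nor the delivery code knows how fruits are stored, the domain gem can be reused by any future client unchanged.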


Running specs without Rails

For most of the Rails work I do, I prefer to run specs without Rails as much as possible. The core of the system usually lives in POROs, so there is no need to wait for the Rails environment to load. The idea is picked up from the web, of course, but with a little twist. In all the POROs I reference spec_helper_without_rails.rb, with the configuration below. Note that some parts were left out because they are not relevant to this topic, but the important part is that Rails is not referenced at all.
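The original configuration snippet did not survive in this copy of the post, but a minimal spec_helper_without_rails.rb along these lines would do; treat it as a sketch, not the author’s exact file.

```ruby
# spec/spec_helper_without_rails.rb -- a hypothetical reconstruction.
# The key point from the post holds: Rails is never required here.
require 'rspec'

# Make lib/ requirable without loading the Rails environment.
$LOAD_PATH.unshift File.expand_path('../../lib', __FILE__)

RSpec.configure do |config|
  config.mock_with :rspec
  # Other project-specific configuration was left out of the post
  # as not relevant to this topic.
end
```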

Now, to run those specs, one recommendation is to use RSpec tags. What bothered me is that the Rails environment is loaded after all, because RSpec seems to load all specs before filtering them out. This resulted in the PORO specs being executed / filtered out correctly, but the load time was as slow as before. The trick, eventually, is to run RSpec only for those specs that use the above helper without Rails. And for that I created a shell/bash script that does just that:
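The script itself is missing from this copy of the post, so here is a sketch of what it might have looked like: collect only the spec files that require the Rails-free helper, and hand just those to RSpec. The helper name matches the one above; the rest is an assumption.

```shell
#!/bin/bash
# Sketch: run only the specs that load the Rails-free helper, so the
# Rails environment is never touched. Hypothetical reconstruction.
fast_spec_files() {  # usage: fast_spec_files [spec-dir]
  grep -rl "spec_helper_without_rails" "${1:-spec}" 2>/dev/null
}

files=$(fast_spec_files spec)
if [ -n "$files" ]; then
  bundle exec rspec $files
fi
```

Since RSpec never sees the Rails-dependent spec files, it has nothing to load and filter, which is what restores the fast start-up.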

Now, the PORO specs are fast again (~3000 per second on my machine) and life is beautiful again too 🙂


Jenkins & Git branches

Jenkins CI is a well known open source continuous integration server, and a damn good one in my opinion. I guess the biggest issue is getting to know all the plugins available, a fun time indeed 🙂 Anyway, since the switch to Git/Gitlab, I needed a setup that would enable the team to use the CI environment in full. The idea was to allow the CI environment to build all the branches, not just the master (release, develop, whichever is your flavor of the day) branches. Manually setting up Jenkins projects for all the Git branches was out of the question.

A little background first. The projects are mainly Java / Maven, and there are a lot of dependencies. The most important rule for the Jenkins environment was that developers keep their tasks / features in separate Git branches. This prevents clashes between developers, but still allows them to work through the entire stack if needed. Jenkins was to be used as a continuous feature testing environment, so all those branches had to find their place in the CI stack too.

Ideally, the solution was to satisfy the following requirements:

  • a single branch that spans several projects should be built and referenced correctly (Maven dependencies)
  • only master (develop) and release branches are to be published to Artifactory; (short-lived) features should not fill it up
  • preferably a single Jenkins project, because of:
    • resources when building on the same machine (e.g. building several branches for the same base project)
    • no need to clutter the views
    • build history should show the branch built
  • should be able to build the entire feature stack across different slave nodes

So, we introduced the parameterized builds. Each Git project must have a single Jenkins project that is configured like this:

  • This build is parameterized checked, with a single parameter “BRANCH_NAME_TO_BUILD” and the default value “master” (or “develop” or whatever you use).

  • Block build when upstream project is building checked – this is not really related to this workflow, but it is a good practice nevertheless; it prevents building a project while its dependencies are being built
  • Git repositories and branches set to track the Git repository and to build the branch from the parameter set up above. All using the Git plugin.

  • Deploy artifacts to Artifactory set to filter out snapshots

  • Deploy artifacts to Maven repository
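As an illustration, the Git plugin part of the job configuration ends up looking something like this. The repository URL is a hypothetical placeholder, and field labels may differ between plugin versions:

```
Repository URL:    git@gitlab.example.com:myproject.git
Branch Specifier:  */${BRANCH_NAME_TO_BUILD}
```

When the build is started, Jenkins substitutes whatever branch name was supplied (or the default) into the specifier, so one job covers every branch.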

A specific thing about the setup, which might not work for you, is the Artifactory / Maven part. The policy is that only release artifacts are allowed in Artifactory. This reduces the noise and keeps Artifactory slick (it is getting rather big anyway). The problem is with building dependencies for feature branches, which are always snapshots. For this to work, and to be able to use different nodes, you still need something like Artifactory. If building on a single node, you can just put “install” as a Maven goal and you’d get the dependency on that computer. For multiple nodes, one idea was to tell all the nodes about each other’s Maven repositories, but that seemed like too much maintenance.

So, the workaround was to create a WebDAV Apache folder that Jenkins could put all builds into. That same repository was referenced in the Maven settings on each of the nodes, and on all of the developers’ machines too. This enabled Jenkins to “know” about all the feature branch artifacts, while not putting too much strain on Artifactory. And that Maven repository can be cleaned periodically without peril.
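As a sketch, the shared WebDAV repository could be wired into each node’s (and developer’s) Maven settings.xml roughly like this; the profile id and URL are hypothetical placeholders, not the actual setup:

```xml
<!-- settings.xml fragment; id and URL are hypothetical placeholders -->
<profiles>
  <profile>
    <id>jenkins-feature-snapshots</id>
    <repositories>
      <repository>
        <id>jenkins-webdav</id>
        <url>http://ci.example.com/maven-repo</url>
        <releases><enabled>false</enabled></releases>
        <snapshots><enabled>true</enabled></snapshots>
      </repository>
    </repositories>
  </profile>
</profiles>
<activeProfiles>
  <activeProfile>jenkins-feature-snapshots</activeProfile>
</activeProfiles>
```

With snapshots enabled only for this repository, feature branch artifacts resolve from the WebDAV folder while release artifacts keep coming from Artifactory.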

This setup is pretty much it. With it you get to build a specified branch at any time. The feature branches build nicely throughout the entire stack and you have a single project on Jenkins that prevents concurrent builds of the same code base (so resource issue is no longer valid).

Still, nothing is perfect and there are a few gotchas:

  • if you are using some wall plugin, you can’t really tell the status of the project, since it shows only the last build status, which may or may not be a feature build – this can be a good thing if you decide to treat broken feature builds as a bad thing 🙂
  • you get to build only one branch at a time; with many commits on the same project, you could wait a long time for Jenkins to build your commit
  • Jenkins can’t really decide correctly on Maven dependencies, since at one point in time a project might reference some feature snapshot (from pom.xml) while at another time it might reference the release version of the same project

The last point is the most problematic one. It prevents Maven from correctly computing project dependencies, which directly influences the upstream/downstream build triggers in Jenkins. You’ll probably have to start builds manually in such situations, or just commit changes again. If the developers in your team are a bit disciplined, this might not be an issue after all. A nice idea on how to avoid this problem is to create a separate Jenkins project for each branch, automatically, as noted here. I am in the process of enabling this support for the Gitlab Jenkins plugin, so stay tuned 🙂


Gitlabhq +1

A few months ago, I had the opportunity to try out and decide on a Git ecosystem. Out of the few existing solutions, like Gitorious and others, I finally settled on Gitlab. The setup procedure was pretty straightforward, although not as automated or easy as I would like it to be. Some work is being done to speed up the process, but for those in need, or the impatient ones, you can always follow the default instructions and apply additional recipes where needed.

Since then, the Gitlab web interface and the entire ecosystem kept running like a charm, no glitches and everyone is pretty much happy with it.

I’ve personally chosen to set it up on CentOS, for which there is a good guide, and in the above recipes you can find specifics for the given OS. Compiling Ruby is what actually took the most time, but with the mentioned resources you shouldn’t have a hard time.

The only issue left is the project <=> user relation. We use LDAP for the domain, and it integrates nicely with Gitlab, authentication-wise. Unfortunately, there is no way to automatically add projects to a new user based on some rule. A relation between LDAP groups and projects would be nice. I read that the latest release has support for project groups, so maybe this will solve the issue; I have to try it out soon. For now, we settled on adding all projects to all users. A rake task is used for this:

desc "Add all users to all projects (admin users are added as masters)"
task :add_users_to_project_teams => :environment do |t, args|
  user_ids = User.where(:admin => false).pluck(:id)
  admin_ids = User.where(:admin => true).pluck(:id)

  Project.find_each do |project|
    puts "Importing #{user_ids.size} users into #{project.code}"
    UsersProject.bulk_import(project, user_ids, UsersProject::DEVELOPER)
    puts "Importing #{admin_ids.size} admins into #{project.code}"
    UsersProject.bulk_import(project, admin_ids, UsersProject::MASTER)
  end
end

desc "Add a user as a developer to all projects"
task :add_user_to_project_teams, [:email] => :environment do |t, args|
  user = User.find_by_email args.email
  project_ids = Project.pluck(:id)
  UsersProject.user_bulk_import(user, project_ids, UsersProject::DEVELOPER)
end