Wednesday, August 9, 2017

Cloud Foundry and Kubernetes for Beginners

Cloud Foundry and Kubernetes are probably the most prominent technologies for cloud infrastructure development. They have very different sets of goals, and as such they follow significantly different solution design approaches.

Cloud Foundry is a traditional Platform-as-a-Service technology, with a specific design orientation towards enterprise-scale resource and privilege management. It follows a top-down approach, where your primary component is a CF "cloud" instance. Within a CF cloud you create organizations and spaces, which are bound to resource quota plans. Quota plans include both computing resources (CPU/RAM/instances) and external service resources (e.g. database storage).
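As a sketch, setting up such a structure with the CF command line client might look like this (the quota, organization and space names are hypothetical):

```
cf create-quota small-team -m 4G -r 10 -s 5
cf create-org acme-org -q small-team
cf create-space dev -o acme-org
```

Here -m caps total memory, -r the number of routes and -s the number of service instances allowed under the quota.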

CF users are assigned to organizations and can deploy/monitor their applications based on their roles. There is a list of CF-supported development languages/frameworks, called buildpacks. Developers/release managers can deploy and monitor their applications using the Cloud Foundry command line client. CF application instances run on Linux containers, so on a CF platform you get the same level of scalability/isolation that you can find on most containerized application platforms.
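For example, deploying and monitoring an application comes down to a few commands (the application name and buildpack are illustrative):

```
cf push my-app -b nodejs_buildpack -i 2 -m 256M
cf app my-app            # instance status and resource usage
cf logs my-app --recent  # recent application log output
```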

CF makes a clear distinction between applications and services. In CF parlance, a service is an abstract resource that can be instantiated and bound to applications. Services are available from a CF service catalog (named the marketplace, per its usual format on public clouds); examples of CF services are object storage, SQL databases, NoSQL databases, big data / deep learning APIs, messaging, etc.
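A sketch of that service workflow, with hypothetical service, plan and instance names:

```
cf marketplace                          # list the available services
cf create-service cleardb spark my-db   # instantiate a service plan
cf bind-service my-app my-db            # bind the instance to an app
cf restage my-app                       # restart so the app picks up the binding
```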

There are many CF-powered PaaS providers; as an IBMer I am more familiar with IBM's offering, Bluemix. Bluemix provides a very large and diverse catalog of services, some of which rely on IBM-exclusive technology. In any case, Cloud Foundry is an open source project, which means you can deploy your own CF instance, leveraging your existing infrastructure and adapting services to your requirements.

Kubernetes is an application container orchestration technology, with a specific design orientation towards application container management and integration. It follows a bottom-up approach, where your primary component is the "pod": a group of one or more containers that can be deployed into a Kubernetes cluster. Pods are most commonly composed using Docker images. There is no default organization structure in a Kubernetes cluster; in order to achieve resource control at an organization level you will need to set up Kubernetes namespaces with resource quotas and roles.
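A minimal sketch of that structure, with hypothetical names, could be a namespace-level quota plus a pod deployed into that namespace:

```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "10"
    limits.cpu: "4"
    limits.memory: 8Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  namespace: team-a
spec:
  containers:
  - name: web
    image: nginx:1.13
    resources:
      limits:
        cpu: 500m
        memory: 256Mi
```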

Developers/release managers (which can have namespace-bound roles) can deploy/monitor their container images. There is no Kubernetes-specific list of images for application language/framework support; you will need to select/deploy/compose the pod with images bundling the required base O.S. image, SDK and applications.
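In practice that composition usually happens in a Dockerfile; a minimal illustrative example (the base image and application names are assumptions):

```
FROM ubuntu:16.04                                    # base O.S. image
RUN apt-get update && apt-get install -y nodejs npm  # language runtime/SDK
COPY . /app                                          # the application itself
WORKDIR /app
CMD ["nodejs", "server.js"]
```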

Kubernetes does not have an explicit distinction between applications and services: a Kubernetes pod can be either an application front end (e.g. Node.js) or a back end (e.g. PostgreSQL), or both. A Kubernetes service is a network-level abstraction, used to define a TCP service from the container that should be exposed externally.
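For example, a Service definition exposing a pod's TCP port (names, labels and ports are illustrative):

```
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort      # expose outside the cluster
  selector:
    app: web          # route to pods carrying this label
  ports:
  - port: 80          # service port
    targetPort: 8080  # container port
```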

There are many Kubernetes distributions and service providers, and there are also several PaaS solutions (e.g. Red Hat's OpenShift) built on top of Kubernetes. IBM is also on the Kubernetes train on its cloud platform: Kubernetes clusters are available as a CF service on Bluemix. Kubernetes is also an open source project; you can try it or build your own infrastructure.

Cloud Foundry is a Platform-as-a-Service, with an explicit organization structure and resource management control system. CF provides officially supported SDKs; services are available at a different level of abstraction, where service instances can be created and bound to applications. CF application instances run within a self-healing, elastic, containerized platform.

Kubernetes is a container orchestration platform, capable of running services on container-based images (most commonly Docker). It provides the freedom, and the responsibility, of running a wide range of components and services that are bundled into images. It provides optional resource control facilities. Kubernetes is a self-healing, elastic, containerized platform.

The best option between a CF app and a Kubernetes pod will depend a lot on the application requirements, team size and skills, and other business constraints.

Wednesday, August 2, 2017

When m(IRC) and ANSI C were popular

It was 2005, and I was amazed by internet chats, both as a user and as a computer programming enthusiast. mIRC, an Internet Relay Chat (IRC) client, was probably the most popular chat app. For some young people at that time, being "online" was not merely about having an internet connection; it was about being online on "mIRC". Sometimes people actually scheduled to be "online" together. Having a continuous internet connection at home was still a luxury for many.

As with many of the early internet services and related software, installing and managing an IRC network was a complex activity; as such, most IRC chat networks were managed by large university groups and internet service providers. This was where I got in: improving the server-side software, making it easier and more flexible to use for everybody.

IRC chat networks provided both chat rooms and private messaging. Users were identified by their chosen nickname, and the chat rooms (named IRC channels) had moderation features. IRC servers kept all their users' and channels' information only in memory; when servers were restarted, all this information was lost. To overcome this, many IRC networks implemented IRC "registration" services. These services worked as "robots" which assigned control over nicknames and chat rooms, keeping that data in a persistent database, and they were also frequently extended with extra features like offline messaging.

I believed that there was great potential for more advanced IRC services, using web/mail integration features which I was not able to find in the existing software. That was when I decided to develop an IRC services software from scratch.

I didn't keep any record of the initial development timeline, and I was not familiar with any open source version control system at the time. "PTlink IRC Services 3" was released around June 2005, containing around 20k lines of ANSI C code.

It featured a C library providing an event-driven API for all the IRC server protocol handling. For example, for an "on connect" message service, you would only need to bind your C function to the NEW_USER event, and from your function you would use irc_SendNotice() to deliver a message.

Services were provided as a set of modules. These modules were implemented as shared object libraries that could be dynamically loaded/reloaded; this modules/plugins support is something you currently find in most software.

Last but not least, the data store back-end was MySQL, while most IRC services were still using custom file-based formats. This also allowed the development of a minimal web interface.

In 2006 the development was halted, mostly because I lost ownership of the domain which was bound to the software, and due to IRC's declining popularity.

It is a bit sad when you spend a few hundred hours on development, especially open source, and it reaches a dead end. Nevertheless, developing an event-driven C library with a modular IRC services integrator was a very challenging and exciting personal experience.

Sunday, April 9, 2017


This weekend I have been playing with an open source project, the Processing language, which seems to be a great tool for computer graphics design/development. It provides a high-level language, abstracting you from the more low-level technical aspects of graphics programming.

I have been looking into how to use it with Python as the development language, and I have found two options:

  1. Python Mode for Processing, a language support add-on for Processing's IDE; it works as a wrapper for the Processing Java library and is based on Jython,
  2. the pyprocessing PyPI package, a regular Python package implementing the Processing language, using OpenGL and Pyglet for the rendering

I preferred to go with the PyPI package in order to be able to keep developing in a Python ecosystem without Java dependencies. Unfortunately, I have found that pyprocessing development has been abandoned. I have found a GitHub repository with the last commit from 5 years ago, which actually fails to install/run because of a missing one-line import statement.

So here I was again, forking yet another repository, because I found a project that I believe has great potential but for whatever reason is no longer maintained. Since this is becoming a common case, creating "keep it working" repositories, I gave it some more thought.

There are many abandoned, yet useful/interesting open source projects. It most commonly happens when projects are started/developed by a single person, sometimes just as prototypes, and at a given point in time the author loses the interest/capacity to maintain them. There are a lot of people with great technical skills/interest in software development, but very limited in community/team building. I have been there.

This is the reason why I have decided to start a new project, named "Adwaita", whose goal is to maintain open source projects' vitality. The primary focus will be on keeping "poorly maintained" open source projects in better condition to be adopted/driven by project-specific communities.

In an "inception" kick-off style, Adwaita will be the first project managed by the Adwaita task force.

Saturday, February 18, 2017

Testing OpenSUSE Tumbleweed

It has been a long time since I have tested a new distro, so here I am again, now trying openSUSE Tumbleweed. I never tried openSUSE for more than a few days, so hopefully this time I will build my own opinion. I am going for Tumbleweed, the rolling release, since I am the bleeding edge guy.


I have selected the NET install iso, because I have a decent internet connection, and I like to have a clean desktop, only installing software as needed. The iso is available from .

I have created a bootable usb with the following procedure:
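A typical dd-based procedure (the iso filename and the /dev/sdX device name are placeholders; double-check the device with lsblk before writing):

```
sudo dd if=openSUSE-Tumbleweed-NET-x86_64-Current.iso of=/dev/sdX bs=4M status=progress
sync
```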

Post Install Issues

After installing a 3rd SSD disk in my desktop computer 2 years ago, installing any OS resulted in a broken boot system. It was no different with Tumbleweed; I just ended up at a GRUB2 "No such device" error. However, it was quite easy to fix: openSUSE's media has a "Boot from installed system" option, which detects an existing install and boots from it. It worked like a charm. With a fully booted and functional system, and having some technical background on the issue, I installed GRUB to the MBR of all 3 disks, and it was done. A reboot presented me with a nice graphical boot menu to select the system (openSUSE Tumbleweed or Windows 10).
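For reference, reinstalling GRUB to the MBR of each disk from the booted system looks roughly like this (the device names are assumptions):

```
sudo grub2-install /dev/sda
sudo grub2-install /dev/sdb
sudo grub2-install /dev/sdc
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```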

I have switched from GNOME (2.0) to Cinnamon over the last couple of years; unfortunately, the installer does not have a Cinnamon option for the install type. I selected "Minimal X Environment" so that I could install Cinnamon from the repositories later, but that got me into another issue. The Minimal X install provided IceWM and YaST (a nice system config management tool); however, while attempting to configure the Wi-Fi network, I found that the system was missing the core packages required for Wi-Fi connectivity (iw; wpa_supplicant). This was kind of blocking, since I don't have a wired network. I had to boot into Windows to fetch the packages from:
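For reference, with working network access the missing pieces would be a couple of zypper commands away (the Cinnamon pattern name is an assumption):

```
sudo zypper install iw wpa_supplicant
sudo zypper install --type pattern cinnamon
```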

I have filed a bug report for this issue:

I am currently finishing this blog post from Tumbleweed; hopefully I will report on a wider experience next week :)

Thursday, August 25, 2016

Creating a portable Python + VSphere Python SDK for Windows

If you are working with Virtual Center in an enterprise environment, there are high chances that your VC clients are Windows systems running on a secure network (no internet access). This article will let you build a portable Python environment, extended with the vSphere Python SDK, that you can just copy and use from your vCenter client systems.

Get the required packages on a system having network access: 

  • lessmsi from 
  • python*.msi from
  • Python packages from pip:
    • six-1.10.0-py2.py3-none-any.whl
    • requests-2.10.0-py2.py3-none-any.whl
    • suds-0.4.tar.gz
    • pyvmomi- 

Create a .bat file that builds the portable Python directory:

lessmsi-v1.4\lessmsi x python-2.7.12.amd64.msi python\
cd python\SourceDir
python -m ensurepip
Scripts\pip install ../../six-1.10.0-py2.py3-none-any.whl
Scripts\pip install ../../requests-2.10.0-py2.py3-none-any.whl
Scripts\pip install ../../suds-0.4.tar.gz
Scripts\pip install ../../pyvmomi-

Run it, and you will get your portable content in the "python" folder.


Tuesday, November 1, 2011

More thinking on software bundles for Linux

The post Rethinking the Linux distribution made me revisit some of the ideas which I had in the past, trying to address what are, in my opinion, major limitations in the current main packaging systems:
  • No support for multiple versions of the same software
  • No support for rollbacks
A software bundle composed of the application and all its "non-core" dependencies can also bring some other benefits:
  • Cross Linux distribution delivery
  • Fine grain control of libraries and options used by an application
  • Reduced complexity with the removal of dependencies management
The disadvantages are:
  • Increased disk and RAM usage, from software containing different versions of common libraries
  • Applying security fixes on dependencies requires re-creating/re-distributing every affected bundle
Possible approach for implementation:

Adapt an existing source-based build system like Arch's "makepkg", with the following changes:
  • Run time prefix must be set to /opt/bbundle
  • Build definitions for dependencies must be contained in the master build definition (this will lead to build definitions redundancy across bundles but will remove the risk of breaking builds by sharing dependencies building rules) 

The bundle file format should be a commonly used archive format; since tar does not provide indexing, .zip is a better option. Having an indexed archive makes it possible to reduce download sizes, by inspecting the bundle contents prior to the download and skipping the download of common files already found in installed bundles. In order to save on-disk space, the bundle installer should check for identical files across bundles and use hard links instead of duplicating files.
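The hard-link step can be sketched in Python (a minimal illustration, not a full installer; the bbundle layout is an assumption):

```python
import hashlib
import os

def file_digest(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def dedup_bundles(root):
    """Replace identical files under root with hard links to one copy.

    Returns the number of bytes saved.
    """
    seen = {}   # digest -> first path seen with that content
    saved = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            digest = file_digest(path)
            if digest in seen:
                original = seen[digest]
                if os.path.samefile(original, path):
                    continue  # already hard-linked
                saved += os.path.getsize(path)
                os.remove(path)
                os.link(original, path)  # hard link instead of a duplicate
            else:
                seen[digest] = path
    return saved
```

Running dedup_bundles() over the bundles root after each bundle extraction would keep common files stored only once on disk.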

Bundle installation should be as simple as extracting the bundle archive into /usr/local/bbundle/bundle_name. A watching service must identify .desktop files and other exportable resources and make them available from the host desktop environment.

Thursday, May 12, 2011

Life changes

It has been more than 1 month since I left the Ubuntu community and started exploring other Linux distros. Meanwhile, increasing personal concerns, added to my country's economic situation, have prompted me to re-evaluate where I am investing my time.

In short, I need to engage in profitable activities otherwise I may fail to support my family.

While I will always be a strong FOSS supporter, because I believe in its values, I no longer have the time for significant involvement in non-profitable projects.
I will continue using, and consequently being involved in, the Linux ecosystem, because it allows me to be more productive, both at my job and in other projects I may get involved with.