Sunday, April 9, 2017

Adwaita

This weekend I have been playing with an open source project, the Processing language, which seems to be a great tool for computer graphics design/development. It provides a high-level language, abstracting you from the more low-level technical aspects of graphics programming.

I have been looking at how to use it with Python as the development language, and I have found two options:

  1. Python Mode for Processing, a language support add-on for Processing's IDE; it works as a wrapper for the Processing Java library and is based on Jython,
  2. the pyprocessing PyPI package, a regular Python package implementing the Processing language using OpenGL and Pyglet for the rendering


I preferred to go with the PyPI package in order to keep developing in a Python ecosystem without Java dependencies. Unfortunately, I have found that pyprocessing development has been abandoned: the GitHub repository's last commit is from 5 years ago, and the package actually fails to install/run because of a missing one-line import statement.
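
For reference, this is roughly what a minimal sketch looks like with pyprocessing's Processing-style API (an illustrative example based on its documentation; note that it exposes a mouse object instead of Processing's mouseX/mouseY):

from pyprocessing import *

def setup():
    size(400, 400)  # open a 400x400 window

def draw():
    background(200)                    # clear to grey on every frame
    ellipse(mouse.x, mouse.y, 60, 60)  # a circle that follows the mouse

run()  # hand control over to the Pyglet event loop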

So here I was again, forking yet another repository, because I found a project which I believe has great potential but for whatever reason is no longer maintained. Since this is becoming a common case, creating "keep it working" repositories, I gave it some more thought.

There are many abandoned, yet useful/interesting open source projects. This most commonly happens when a project is started/developed by a single person, sometimes just as a prototype, and at a given point in time the author loses the interest or capacity to maintain it. There are a lot of people with great technical skills and interest in software development, but very limited skills in community/team building. I have been there.

This is the reason why I have decided to start a new project, named "Adwaita", whose goal is to maintain the vitality of open source projects. The primary focus will be on keeping "poorly maintained" open source projects in better condition to be adopted/driven by project-specific communities.

In an "inception" kick-off style, Adwaita will be the first project managed by the Adwaita task force.

Saturday, February 18, 2017

Testing openSUSE Tumbleweed

It has been a long time since I have tested a new distro, so here I am again, now trying openSUSE Tumbleweed. I have never tried openSUSE for more than a few days, so hopefully this time I will build my own opinion. I am going for Tumbleweed, the rolling release, since I am a bleeding-edge guy.

Install

I have selected the NET install ISO, because I have a decent internet connection and I like to have a clean desktop, only installing software as needed. The ISO is available from http://download.opensuse.org/tumbleweed/iso/ .

I have created a bootable USB with the following procedure:
https://en.opensuse.org/SDB:Create_a_Live_USB_stick_using_Windows#Using_ImageUSB

Post Install Issues

After I installed a 3rd SSD disk in my desktop computer 2 years ago, installing any OS has resulted in a broken boot setup. It was no different with Tumbleweed: I just ended up on a GRUB2 "No such device" error. However, it was quite easy to fix. openSUSE's media has a "Boot from installed system" option which detects an existing install and boots from it. It worked like a charm. Then, having some technical background on the issue, and with a fully booted and functional system, I installed GRUB to the MBR of all 3 disks, and it was done. A reboot presented me with a nice graphical boot menu to select the system (openSUSE Tumbleweed or Windows 10).

I switched from GNOME (2.0) to Cinnamon a couple of years ago; unfortunately, the installer does not offer a Cinnamon option for the install type. I selected "Minimal X Environment" so that I could install Cinnamon from the repositories later, and that got me into another issue. The Minimal X install provided IceWM and YaST (a nice system config management tool); however, while attempting to configure the WiFi network, I found that the system was missing the core packages required for WiFi connectivity (iw, wpa_supplicant). That was kind of blocking, since I don't have a wired network. I had to boot into Windows to fetch the packages.

I have filed a bug report for this issue.

I am currently finishing this blog post from Tumbleweed; hopefully I will report on a wider experience during next week :)



Thursday, August 25, 2016

Creating a portable Python + vSphere Python SDK for Windows

If you are working with Virtual Center in an enterprise environment, there are high chances that your VC clients are Windows systems running on a secure network (no external network access). This article will let you build a portable Python bundle extended with the vSphere Python SDK that you can just copy to and use from your vCenter client systems.
  

Get the required packages on a system having network access: 

  • lessmsi from https://github.com/activescott/lessmsi/releases/latest 
  • python*.msi from https://www.python.org/downloads/windows/
  • Python packages from pip:
    • six-1.10.0-py2.py3-none-any.whl
    • requests-2.10.0-py2.py3-none-any.whl
    • suds-0.4.tar.gz
    • pyvmomi-6.0.0.2016.6.tar.gz 

Create a .bat file that builds the portable Python directory:

:: Extract the Python MSI into python\SourceDir (no installation required)
lessmsi-v1.4\lessmsi x python-2.7.12.amd64.msi python\
cd python\SourceDir
:: Bootstrap pip into the extracted Python, then install the SDK and its dependencies
python -m ensurepip
Scripts\pip install ..\..\six-1.10.0-py2.py3-none-any.whl
Scripts\pip install ..\..\requests-2.10.0-py2.py3-none-any.whl
Scripts\pip install ..\..\suds-0.4.tar.gz
Scripts\pip install ..\..\pyvmomi-6.0.0.2016.6.tar.gz

Run it, and you will get your portable content in the "python" folder.
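
To confirm the portable build works from a vCenter client system, a small test script along these lines should do (the host name and credentials are placeholders, and the unverified SSL context is only needed for self-signed certificates):

# test_vc.py - quick connectivity check for the portable build
# (host and credentials below are placeholders, replace with your own)
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Python 2.7.9+ verifies SSL certificates by default, which fails against
# the self-signed certificates commonly found on vCenter; skip verification.
context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="secret", sslContext=context)
print(si.content.about.fullName)  # e.g. the vCenter product name/version
Disconnect(si)

Run it with the portable interpreter: python\SourceDir\python.exe test_vc.py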

 

Tuesday, November 1, 2011

More thinking on software bundles for Linux

The post Rethinking the Linux distribution made me revisit some ideas I had in the past for addressing what are, in my opinion, major limitations of the current main packaging systems:
  • No support for multiple versions of the same software
  • No support for rollbacks
A software bundle composed of the application and all its "non-core" dependencies can also bring some other benefits:
  • Cross Linux distribution delivery
  • Fine grain control of libraries and options used by an application
  • Reduced complexity with the removal of dependencies management
The disadvantages are:
  • Increased disk and RAM usage, as software will contain different versions of common libraries
  • Applying security fixes to dependencies requires re-creating/re-distributing every affected bundle
Possible approach for implementation:

Compiling
Adapt an existing source-based build system like Arch's "makepkg", with the following changes:
  • Run time prefix must be set to /opt/bbundle
  • Build definitions for dependencies must be contained in the master build definition (this will lead to build definition redundancy across bundles, but will remove the risk of breaking builds by sharing dependency build rules)

Bundling
The bundle file format should be a commonly used archive format; since tar does not provide indexing, .zip is a better option. Having an indexed archive will allow reducing download sizes by inspecting the bundle contents prior to the download and skipping the download of common files already found in installed bundles. In order to save on-disk space, the bundle installer should check for identical files across bundles and use hard links instead of duplicating files.
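
As an illustration of the hard-link step, here is a minimal Python sketch (the bundle root path is the one assumed in this post; a real installer would also need to handle permissions and race conditions):

import hashlib
import os

BUNDLE_ROOT = "/opt/bbundle"

def file_digest(path):
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def dedup(root=BUNDLE_ROOT):
    """Replace byte-identical files across bundles with hard links."""
    seen = {}  # digest -> first path seen with that content
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                continue  # leave symlinks alone
            original = seen.setdefault(file_digest(path), path)
            if original != path and not os.path.samefile(original, path):
                os.remove(path)          # drop the duplicate copy...
                os.link(original, path)  # ...and hard-link to the original

if __name__ == "__main__":
    dedup()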

Installing
Bundle installation should be as simple as extracting the bundle archive into /opt/bbundle/bundle_name (matching the run-time prefix set at build time). A watching service must identify .desktop files and other exportable resources and make them available from the host desktop environment.
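
A rough sketch of such an installer, with the .desktop export done as a one-shot scan rather than a watching service (the applications directory and all names here are illustrative):

import os
import zipfile

BUNDLE_ROOT = "/opt/bbundle"
APPLICATIONS_DIR = "/usr/local/share/applications"  # assumed XDG data dir

def install_bundle(archive_path, bundle_name):
    """Extract a .zip bundle and expose its .desktop launchers to the host."""
    target = os.path.join(BUNDLE_ROOT, bundle_name)
    with zipfile.ZipFile(archive_path) as bundle:
        bundle.extractall(target)
    for dirpath, _dirs, files in os.walk(target):
        for name in files:
            if name.endswith(".desktop"):
                destination = os.path.join(APPLICATIONS_DIR, name)
                if not os.path.exists(destination):
                    os.symlink(os.path.join(dirpath, name), destination)

install_bundle("inkscape-0.48.zip", "inkscape-0.48")  # hypothetical bundle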

Thursday, May 12, 2011

Life changes

It has been more than 1 month since I left the Ubuntu community and started exploring other Linux distros. Meanwhile, increasing personal concerns, added to my country's economic situation, prompted me to re-evaluate where I am investing my time.

In short, I need to engage in profitable activities otherwise I may fail to support my family.

While I will always be a strong FOSS supporter, because I believe in its values, I no longer have the time for significant involvement in non-profitable projects.
I will continue using, and consequently being involved in, the Linux ecosystem, because it allows me to be more productive, both in my job and in other projects I may get involved with.

Monday, March 14, 2011

Building RPMs vs Building DEBs

Having extensive experience with Debian package building, it's refreshing to try something else. Last week I learned the basics of RPM package building.
Here are the differences I have found so far and my opinions about them.

An RPM .spec file contains both the package metadata (description, dependencies, etc.) and the build rules, while with DEBs you have that data split across different files. I remember that in the beginning it was hard to understand the purpose of all those debian/* files; I have found .spec files easier to understand.

RPM .spec files also allow conditional building: during the build, the target release can be used to dynamically adjust build flags, dependencies, etc. While you can achieve this on Debian using some auto-generation mechanism (debian/control.in), it is not naturally integrated into the build system; debian/* contains metadata and rules for a single specific target system.
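
For example, a hypothetical .spec fragment could select a build dependency based on the target distribution like this:

%if 0%{?fedora} >= 14
BuildRequires: libcurl-devel
%else
BuildRequires: curl-devel
%endif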

Not so important, but a nice feature, is the support for translated descriptions/summaries in RPM packages.

Saturday, March 5, 2011

Some differences between Fedora and Ubuntu

I have already noted a few technical differences between Fedora and Ubuntu, which I am going to comment on.

/tmp cleanup
Fedora does not automatically remove /tmp contents on reboot. I prefer Ubuntu's behavior: applications should not rely on /tmp contents across reboots, and regular users should not be working in /tmp. If there is no other automated cleanup mechanism (I did not check yet), in the long run the user will end up with a full root file system.

Repository information cache
I don’t have a YUM technical background, so please excuse me if I write something terribly wrong here.
From a user perspective, I have noted that yum does not have an explicit cache mechanism; you don’t need to explicitly update the cache. The good side is that it automatically fetches the required information when a new repository is added, freeing the user from a repetitive action. The bad side is that it may introduce some network/time overhead during package management operations.

Software update policy
I did not read Fedora’s update policy yet, but I have noted that they provide regular release upgrades for some software; Pidgin was updated to 2.7.10 from the regular updates repository.