Has finding remote work changed?

Some 12 years ago, my best friend and I set out on a mission to build a company that would offer premium Linux system administration services and support to companies around the world. We were young and enthusiastic, with sharp skills and solid experience; the industry was blooming, and we just needed clients. We knew that our location (Serbia) could be an issue, but we also knew about websites that connect clients and engineers (Elance, RentACoder, and so on). Each of us had worked with a few such clients in the past on short-term contracts, we knew how much money was in play, and success seemed inevitable.

A few months later, we declared failure. After numerous bids, we could not get a single project to work on remotely, even though we were sure we were in the top 5% when it comes to knowledge and quality of work. The main reason for our failure was the sub-optimal platforms we were using to find work. Each freelancer could put whatever they wanted on their profile; nobody could verify it, and there was no screening of any sort. Employers would select bidders by lowest price and previous success score on the platform, making it almost impossible for newcomers to get any real work. Another issue was finding projects on those lists that were actually worth working on. Categorization was awful. When you filtered by “linux administration”, you would get results ranging from “homework help for $2” to “robot machine that does surgery on humans and runs on linux, for $1500+”. It took hours to filter through those offerings.

Anyway, we figured it wasn’t worth spending time waiting for Santa to visit, and we just moved on with our lives, finding regular (and lower-paid) jobs. Since then, we’ve all had plenty of success, which I’m happy about, but the experience of finding remote work left a bitter taste in my mouth.

Fast forward 12 years: 6-7 months ago, I was at the office when a senior Rails developer visited, looking for a gig. As I was interviewing him and listening to his previous work experience, he started talking about TopTal and the projects he had worked on through them. As I hadn’t heard of them yet, I asked him what TopTal is. He quickly explained: “They help us freelancers find remote work that pays well. Regular projects that are cool and worth working on.” I almost immediately wanted to check it out, but then he continued, “you just have to be selected and invited as a top talent”. I sighed. Later on I checked the website, and I was sure it was just another Elance that tries to attract freelancers through exclusivity (much like how Facebook started).

Last Sunday, I was at the bar with my friends from the early days, having beers and telling “war stories of geeks”. We’re all successful now, but we still regret that we didn’t make it big when the opportunity was there. That night, when I got home, I took another look at TopTal. I searched “toptal linux”, went to their page, and there was a pleasant surprise. From their page: “marketplace for top Linux developers, engineers, programmers, coders, architects, and consultants. Top companies and start-ups choose Toptal Linux freelancers for their mission critical software projects.”, accompanied by a respectable list of companies. Isn’t that EXACTLY what we all want? Needless to say, I went on and submitted my resume.

Oh, resume? Not really. It’s just a small part of the many questions they asked me. And I must say, those were REAL questions, the kind I actually wanted to be asked 12 years ago. How cool is that?

Anyone had experience working with them?

Posted in General | Leave a comment

Linux NVidia Optimus on ThinkPad W520/W530 with external monitor – finally solved

How NVidia Optimus works on Thinkpad W520/W530 laptops

ThinkPads (and probably some other laptops) with an NVIDIA Quadro 1000M or 2000M are wired so that the integrated Intel graphics chip does all the rendering and display for the built-in LCD screen, while all external output ports (VGA/DisplayPort) are wired through the NVIDIA chip, which can also be fired up on demand to render 3D content. So what’s the problem?
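
You can see this wiring from a terminal: lspci lists both GPUs, with Intel at 00:02.0 and the NVIDIA card at 01:00.0 (the exact model strings will differ per machine):

lspci | grep -E "VGA|3D"
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller
01:00.0 VGA compatible controller: NVIDIA Corporation GF108GLM [Quadro 1000M]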

Basically, in order to have an external monitor connected to DisplayPort or VGA, we have two options:

  1. Go to BIOS -> Config -> Display, and set graphics to “Discrete Only”. This will make NVIDIA your primary graphics card, which, with the proprietary NVIDIA drivers, will make your external monitor work. However, this also means your battery life will suck. In my case, I got a 60-70% decrease in battery life with this setup, so it was a no-go.
  2. As of a few weeks ago, I have a complete working solution which does not eat your battery when you’re unplugged, does not require you to restart X or the computer when you want to connect/disconnect, or any such inconveniences, gives you the ability to connect one or two external monitors to your laptop, and is relatively easy to set up.
Needless to say, this blog post focuses on #2.


How to get there?

Groundwork steps:

  1. Go to BIOS -> Config -> Display, select “NVidia Optimus”, and make sure to enable “Optimus OS Detection”.
  2. Boot into your Linux and log in.
  3. Install the latest NVIDIA drivers. At the time of writing, the version is 331.20. On Ubuntu 13.10, it looks like this:
sudo add-apt-repository ppa:xorg-edgers/ppa 
sudo apt-get update 
sudo apt-get install nvidia-331
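
Once the install finishes, you can double-check which version actually landed (a quick sanity check, nothing more):

apt-cache policy nvidia-331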

Now we need to install bumblebee:

sudo add-apt-repository ppa:bumblebee/stable
sudo apt-get update
sudo apt-get install bumblebee bumblebee-nvidia bbswitch-dkms

At this point, I recommend a reboot.
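
After the reboot, the bumblebee daemon should be up. If you want to make sure before continuing, something like this should show it running (on Ubuntu it ships as an upstart job; the job name here is my assumption):

service bumblebeed status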

Configure bumblebee

A few more things are needed to get this running, and I’ll cover them now. First, edit /etc/bumblebee/bumblebee.conf, then find and change this parameter so it looks like this (it appears in two locations in the file):

PMMethod=none
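
To make sure you caught both occurrences, a quick grep does the trick:

grep -n PMMethod /etc/bumblebee/bumblebee.conf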

Next, edit /etc/bumblebee/xorg.conf.nvidia and make it look like this:

Section "ServerLayout"
    Identifier    "Layout0"
EndSection

Section "Device"
    Identifier    "DiscreteNvidia"
    Driver        "nvidia"
    VendorName    "NVIDIA Corporation"
    BusID         "PCI:01:00:0"
    Option        "ProbeAllGpus" "false"
    Option        "NoLogo" "true"
EndSection

Intel-virtual-output tool

First, you will need the latest xf86-video-intel driver installed (2.99). Ubuntu 13.10 comes with it, so you don’t need to update the driver in that case. However, what made all of this possible is the latest release of the intel-virtual-output tool, which comes bundled with the xf86-video-intel driver source. But Ubuntu’s package does not bundle it, so we need to compile it from source. One MAJOR thing to note here: DO NOT compile it from Ubuntu’s deb-src package. That package is old, and the current release has some major fixes for the tool that we actually need in order to have everything working properly. So let’s do it:

sudo apt-get install xorg-dev git autoconf automake libtool xutils-dev
git clone git://anongit.freedesktop.org/xorg/driver/xf86-video-intel
cd xf86-video-intel
./autogen.sh
cd tools
make
sudo cp intel-virtual-output /usr/bin/
sudo chmod +x /usr/bin/intel-virtual-output

Oh, and now that precious moment we’ve all been waiting for

Now, connect your external monitor to VGA or DisplayPort, and run this:

sudo modprobe bbswitch
optirun true
intel-virtual-output

And you’re done! What the commands above did is: they fired up the NVIDIA card in the background so that we can use its external ports for rendering, started another X server in the background which runs on the NVIDIA card, and launched the tool that proxies screen contents to it. All your apps are still rendered via the Intel card, but can be proxied to the external monitor. Just open up KDE System Settings -> Display and Monitor, and you’ll see 2 monitors as you normally would, and you can place them in any position you like. The same goes for Unity’s settings.
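
If the second monitor doesn’t show up right away, you can check that the proxying actually worked: intel-virtual-output exposes the NVIDIA-driven ports as VIRTUAL outputs on the Intel side, so xrandr should list them (output names may vary per setup):

xrandr | grep VIRTUAL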

You might notice a small lag here and there (nothing of major importance), but that’s being worked on, and future kernel and driver releases will improve the situation.

Wanna go mobile and turn off the nvidia card? No problem.

Now that you’ve enjoyed your static setup, it’s time to go mobile without draining the battery. These are the simple steps to do so:

  1. Disconnect your external monitor.
  2. Kill the second X server. To find the process, run ps ax | grep Xorg. You should see something like this:
    $ ps ax | grep Xorg
    3342 ?        Ss    68:08 Xorg :8 -config /etc/bumblebee/xorg.conf.nvidia -configdir /etc/bumblebee/xorg.conf.d -sharevts -nolisten tcp -noreset -verbose 3 -isolateDevice PCI:01:00:0 -modulepath /usr/lib/nvidia-331/xorg,/usr/lib/xorg/modules
    Now kill the process:
    $ sudo kill -15 3342
  3. Turn off the NVIDIA card completely:
    sudo rmmod nvidia
    sudo tee /proc/acpi/bbswitch <<<OFF
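
If you want to confirm the card is really powered down, bbswitch can be queried directly; it should report OFF next to your card’s PCI address:

$ cat /proc/acpi/bbswitch
0000:01:00.0 OFF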

That’s it. I hope I made your day :)

Posted in General, Linux Desktop | 49 Comments


I’ve been playing around with Go lately, and I found that every time I’m about to write some small web app, I have to “prepare” my environment for all the standard things I believe we all use. So I decided to make a lightweight, ready-to-use framework which can simply be cloned from GitHub, configured in a human-readable text config file, and fired up. Well, almost. One would obviously need to write some code for the app to execute, so I tried to organize it the way Django does it with Python, only without the models part. I hate models :) There’s a handlers.go example file (like views.py in Django), which is where all the action should happen. I tried to give a few basic usage examples within the example file, so it should be straightforward for everyone to pick up.

One of the key goals is to let coders do their work in plain Go (or with first/third-party packages). I didn’t want to have a bunch of APIs around the core infrastructure and force coders to learn them. It’s that simple.
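
To give a feel for what “plain Go” means here, a handler can be nothing more than a standard net/http function. Just an illustrative sketch (the names and route below are made up, not the project’s actual API):

package main

import (
	"fmt"
	"log"
	"net/http"
)

// helloHandler is a plain net/http handler: no framework-specific
// types or interfaces to learn, just the standard library.
func helloHandler(w http.ResponseWriter, r *http.Request) {
	name := r.URL.Query().Get("name")
	if name == "" {
		name = "world"
	}
	fmt.Fprintf(w, "Hello, %s!\n", name)
}

func main() {
	http.HandleFunc("/hello", helloHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}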

One of the project’s followers, Louis Zeun, gave me a few suggestions which were very helpful and inspired me to keep going with adding new features. This led me to thinking about having a modular infrastructure in place, so that I (and others) could easily develop more specific packages like Authentication or an Admin console. Hopefully, some new code will show up soon, so keep watching! :)

Github Project Page

Happy coding!


Posted in Go, Programming | Leave a comment

Gnome and Unity font aliasing and rendering fixed

A few years back, Ubuntu started to play around with different fonts and different settings for antialiasing and hinting, making the desktop look boldish and fonts bigger than they should be. This was not too much of a problem, because old versions of Gnome (2.x) had a great tool for changing the font settings; however, since the Unity interface came along, these options have disappeared from the GUI. Nowadays, Gnome3 is getting popular, and since Unity uses the same settings manager, we’re left without options.

Or not…

Although there’s no graphical tool anymore to set this in an easy way, there’s a command-line tool called ‘gsettings’ which can be used to accomplish the same goal. Basically, everything that the Gnome team decided is not ‘simple enough for users’ has been removed from the GUI tools. And there goes our favorite font configuration GUI :(

To get back that fine, slick look of fonts that don’t make you feel like a half-blind person, open up your terminal and try out these commands:

 gsettings set org.gnome.settings-daemon.plugins.xsettings hinting 'medium'
 gsettings set org.gnome.settings-daemon.plugins.xsettings antialiasing 'grayscale'
 gsettings set org.gnome.desktop.interface document-font-name 'Arial 10'
 gsettings set org.gnome.desktop.interface font-name 'Arial 10'
 gsettings set org.gnome.desktop.interface monospace-font-name 'Monospace 10'
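
If you want to see what you currently have before changing anything (or to restore the old values later), the same keys can be read back:

 gsettings get org.gnome.settings-daemon.plugins.xsettings hinting
 gsettings get org.gnome.desktop.interface font-name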

Hope your desktop looks sane again :)

Posted in Linux Desktop | Leave a comment

Yay! I got certified!

After many, many years of avoiding any type of computer skills certification program, I was recently kinda cornered by the person with the most evil job description in the world – my project manager – so I had to apply for two of them: the Applogic cloud system operators and Applogic applications architect exams.

Unlike some other exams, like the RedHat ones I’ve seen online, these actually surprised me in a positive way! The questions were obviously submitted by some nerd / system admin / Applogic expert, and unless you REALLY know how things work – which can only be learned through months or years of actual experience – his ego clearly shouts out ‘Youuuu shall not paaaaaass!!!’. There are 45 questions on each exam, out of which some were easy, but most were tricky and slick, and your score needs to be above 80%. I would definitely not suggest that any Applogic newcomer take these exams until their skill set and experience reach the point where almost everything becomes routine.

Anyway, I passed both and now I’m the proud owner of two CA Applogic certificates! Hooray!

Posted in Cloud | Leave a comment

Does it all have to be on the Web?

When designing a distributed system, at some point we need to decide which communication protocol and data types to use for communication between different system components. A large portion of our design philosophy depends directly on this decision, as it affects how we handle reliability, flexibility, security, scalability and performance of the whole system.

This is also the point where huge mistakes are easily made. The community doesn’t help much either, because in this particular domain (Application-to-Application protocols and data types), people are usually divided between two different approaches:

  • RPC, SOAP and WS-* style (let’s call it SOA group)
  • REST (Representational State Transfer) architecture

Each group has its strong and weak sides (I’m not going to cover those here, as they are widely covered already). In environments where compromises are not an option (for example: latency/performance), the SOA group is the first choice, but this really makes a difference only in extremely large-scale environments like Google, Yahoo and others like them. Pretty much everywhere else, REST should be a clear winner. However, I don’t feel entirely comfortable with that either, and that’s why I’m writing this.

The REST architecture was designed for distributed hypermedia systems, and it was designed in parallel with the HTTP/1.1 protocol by Roy Fielding. That’s why it’s often referred to as RESTful HTTP. But that’s where we can get it all wrong.

A great deal of the HTTP protocol’s success comes from the extreme flexibility of the HTML and XML languages. Because of their robustness, and HTTP’s downfalls, in Application-to-Application (not Application-to-User/Browser) communication we’re forced to choose convenience over correctness, creating huge bloat where we actually shouldn’t. Programmers tend to write distributed RESTful systems on top of TCP/HTTP without even realizing the lack of correctness.

Not so long ago, Google stepped out with a new protocol design called SPDY (pronounced “speedy”). The idea was to incorporate SPDY as a session layer below HTTP, to achieve a number of things TCP/HTTP alone do not offer, which in turn results in much faster loading of Web pages and a better overall user experience. Some of the key points in SPDY’s design are:

  • Session multiplexing. Clients can send and receive parallel requests/responses over a single TCP connection
  • Bi-directional communication. Both ends can act as “client” and “server”
  • The server can push unrequested data frames related to a client’s request

Whether or not SPDY gets IETF approval some day (a topic for some other discussion), it got me thinking how correct and powerful it would be to have such a protocol in developers’ arsenal for App-to-App communication. Currently, the SPDY design process is on its second draft, and more and more I see new features/changes being made to properly support today’s Web standards.

How about a lightweight SPDY clone designed solely for App-to-App communication and not for the Web? How about a “lightweight” HTTP on top of such a clone? Where are the boundaries, then, between protocol utilization and message self-descriptiveness bloat?

Posted in Philosophy | Leave a comment

Hello world!

Hello world indeed. This is the first post on my fresh new blog. For years, I’ve somehow managed to escape being part of the blogging community, not because I don’t like bloggers in general, but because I hate bloggers who really have nothing to say. Ok, I don’t really hate them; instead, I feel sorry for the search engines :)

Lately, I’ve gotten myself more into the world of developers. I was doing some research and a little bit of coding, but day after day, the more I learned, the more I found myself pulled in. The field I was exploring (and still do, in fact) was distributed systems, protocols and the programming languages involved, client-server philosophies and all sorts of other things. In the meantime, I wasn’t lazy: I learned Google’s Go language more deeply, and I must say I quite like it, so Go will have its place in my posts.
Anyway, those are the topics I tend to write about. I also intend to release some code, but more on that in future posts.

After reading numerous discussions throughout the community, I felt more and more like writing something, but I needed a place where I could feel at home. I hope you’ll find some of the material useful.

Posted in General | Leave a comment