Archive | Operating System

Something Broke!

Linux has come a long way since the days of non-existent wifi drivers and flash plugins. I dare say Linux is ready for non-technical users. At least I would say this if I didn’t still occasionally get random errors. You know, like this one:

[Image: thanks for letting me know]

Awesome, something broke! Thanks for letting me know! Twice!

This error is a common one, and easily fixed. However, I never remember the specific incantation, so I always have to Google it. But no more! Today I record the solution in my log!

The Solution

This one is caused by some process crashing. When a process crashes, apparently it dumps some junk into /var/crash that presumably the developer of said process knows about and cares about. Unfortunately I’m not said developer, and I don’t care about a one time crash of some random process. I do care that I’m getting 2-3 useless popups whenever I boot my Linux partition up. Let’s fix this:

NOTE: Consider looking in /var/crash to see what happened before randomly deleting all the stuff in it.
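If you’re curious, a quick peek might go something like this (the crash file name below is just a made-up example; yours will be named after whatever program died):

$ ls /var/crash
_usr_bin_some-program.1000.crash
$ less /var/crash/_usr_bin_some-program.1000.crash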

$ sudo rm /var/crash/*
$ sudo init 6

After your machine reboots, you should be good to go.

As The Dust Settles

Rumors are swirling that Windows 8’s days are numbered. Windows 9 will allegedly be officially unveiled as soon as next month, and with it many of the changes in Windows 8 are being reverted. The start menu is back, Modern UI applications can run in a window, and the charms menu is dead.

All throughout the land, the people cheer! The beast is dead! A new age of enlightenment is upon us! Yet, amid the celebration, there stands a man who doesn’t look so cheerful. While the rest of the kingdom toasts the demise of Windows 8, I think of what could have been.

Don’t get me wrong: Windows 8 was terrible. In fact, Windows 8 was so bad that it drove me to abandon Windows and switch to Linux full-time. The fact that Microsoft has so radically changed their course is a good thing; a little humility will do them good. However, Windows 8 had a lot of innovative ideas. They may have had implementation issues, but most of these so-called anti-features could have been great. Unfortunately, the little failures that ruined the experience have taught the industry the wrong lesson. The industry’s takeaway from this fiasco is “people don’t want this.” However, I believe the lesson to be learned is “if you’re going to change something, it must be perfect.”

Today, I’d like to talk about some of the innovative features of Windows 8: why I think they are great, and what I think went wrong.

The “Modern UI”

Since the dawn of the Graphical User Interface, we’ve used what is known as a “desktop metaphor.” The idea is that at the bottom, we have a desktop. On this desktop, we can put various things. We can put programs on our desktop, much like we put pens and paper clips and such. We can have “windows” open, much like the papers we write on. You know this story: you are probably reading this in a browser window open on your computer right now. Tell me, when is the last time you got any actual work done with this window configuration?

[Image: yay_windows]

I’m going to go with “never”. Sure this has probably happened to you, but I’m guessing you quickly maximized a window and restored order. If you do work with multiple windows, you probably arranged them like this:

[Image: do_this]

…or maybe like this:

[Image: maybe_this]

I’ve always been a fan of this configuration myself:

[Image: i_do_this]

You most likely painstakingly arrange your windows so that they use the most screen real estate possible, except in cases where the program can’t use the space:

[Image: non_maximized]

So, what’s the point? The only real thing we gain out of this arrangement is familiarity. Humans are by nature resistant to change. Something may be better, but it’s different, and that scares us. But there are options other than the desktop metaphor. While there are few mainstream examples, tiling window managers offer a different take.

In a tiling window manager, windows cannot overlap. A window will take up as much space as possible, and if multiple windows are visible, the window manager will lay them out next to each other in various configurations. Since the window manager handles resizing and such, there is no need for window decorations and sizing controls.

The problem with these is the fact that they are hard to use. They require a lot of keybindings, and extensive config file editing. They fall squarely in the “fringe” of software. There is one mainstream tiling window manager though: the Windows Modern UI, formerly known as “Metro.”

When a Modern UI application is launched, it becomes full-screen by default. You can then “snap” applications into up to four vertical columns, depending on your screen resolution, and do some simple window arrangement and sizing with your mouse. Unfortunately, while traditional tiling window managers are needlessly arcane and complex, the Modern UI is overly simplistic. You are limited to that one arrangement.

But the real problem is much bigger. This will be a recurring theme: the main issue with the Modern UI is that legacy applications don’t use it. Nobody tried the Modern UI because none of their applications used it, so they never got used to it; and since none of their applications use it, the Modern UI is, by default, “bad.”

Even Microsoft’s own software by and large didn’t use the Modern UI. The vast majority of Microsoft’s software that shipped with Windows 8 uses the traditional windowing system. To this day there is not a version of Microsoft Office that uses the Modern UI. The software that did use the Modern UI tended to lack functionality: the Modern UI versions of OneNote and Skype have vastly reduced functionality compared to their desktop equivalents. It also didn’t help that most of the Modern UI applications that shipped with Windows had banner ads baked in!

What do I propose? It seems simple to me: eliminate the desktop. Get rid of it period. All traditional desktop applications now run within a Modern UI window. They lose their window decorations, and can be closed using the standard method of grabbing the top of the window and dragging it to the bottom. If the application shows child windows, these windows will be displayed within the frame of the parent application with window decorations. These windows cannot be dragged outside the frame they are shown in. In effect, the application becomes its own desktop.

The Start Screen

Gone from Windows 8 is the Start Menu. Since Windows 95, there has been a little “Start” button on the bottom left corner of any Windows desktop. When pressed, it displays a little menu with all your programs arranged in a convoluted tree structure. If you told me you’ve never seen an incarnation of this, I’d call you a liar; it’s that ubiquitous. Around the same time as the introduction of the Start Menu came keyboards with a special “Windows” key. The purpose of this key is primarily to show the Start Menu, so you don’t even need to click the button.

The Start Menu is gone in Windows 8. In its place is the Start Screen. The Start Button was also removed completely in Windows 8, but was brought back due to popular demand in Windows 8.1, where clicking it shows the Start Screen.

The Start Screen is basically a full-screen Start Menu. However, the tree is hidden, and instead you get a more tablet-like arrangement of your programs. This is shown as a grid of “tiles” that function as souped-up icons. These tiles can dynamically show information of a program’s choosing. A photo gallery application might show a mini slide show. A news application might show headlines. An e-mail application can show incoming messages.

I’m sure somebody will try, but I don’t believe a reasonable person can argue that these tiles aren’t an improvement on the old arrangement. Screen resolutions have gone up since the advent of icons, and it’s about time we put that extra space to use. However, the problem with tiles is that their functionality is limited to Modern UI applications.

Granted, some work would have to be done to update a traditional application to make use of the dynamic tile functionality, but there’s no reason it couldn’t be done; in fact, it would be quite easy. Unfortunately, Microsoft arbitrarily decided that only Modern UI applications should have access to this functionality. Traditional UI applications don’t even look the same:

[Image: start-screen]

On the bottom right, you’ll see some traditional applications, surrounded by nice pretty tiles. Those icons are forever doomed to be static and ugly because Microsoft wills it.

Charms Bar

Of all the things I’ve talked about, this is actually the one I’m most disappointed about. Live tiles live on in Windows 9, and I suspect the Modern UI tiling window manager will live on in Windows RT. However, the Charms Bar is just dead. The Charms Bar is, in my opinion, the most innovative feature of Windows 8, and it is dying sad and alone because nobody understands it. You may even be wondering what I’m talking about. The Charms Bar is this thing:

[Image: charms_bar]

You most likely recognize it as that annoying bar that pops up and gets in your way when you try to close a maximized program. It’s notable for doing seemingly nothing. Maybe you figured out that the shutdown option is hidden in there. The Charms Bar is actually amazing! …in theory. In practice it’s hampered by the fact that it’s tied to the Modern UI. Let’s talk about what this thing actually does.

Search

The Search charm is a context-aware search feature. If you’re sitting at the Start Screen, it functions as your standard Windows search. But if you’re in a Modern UI application? In this case, the search charm does whatever the application wants it to. In a text editor it may search the document for a string. If you’re in an instant messenger it might search your buddy list. If you’re in an e-mail application it might search your inbox for a message.

The Search charm, as with all the charms, is controlled by the active application. The idea is that no matter what you’re doing, if you want to search, the function will always be located within the Search charm. This provides a consistent way to search across all applications!

Share

The unfortunately named Share charm isn’t actually about posting to Facebook. That said, posting to Facebook is a valid use of the Share charm. The Share charm is a context-aware way to send data to another application. An image editing application might be able to send an image via email, or post it online via Facebook. To do this, you could use the Share charm, and select the appropriate application. Similarly, an email program could support opening an image in an image editor via the Share charm. The Share charm is all about outputting data to other applications, and can almost be seen as a fancy graphical version of the UNIX pipe!

Start

The Start charm is just the Start Button. It was never gone, just moved.

Devices

The idea here is similar to the Share charm. The Devices charm allows you to interact with any appropriate hardware device. An image editor might list cameras (to import images), scanners (to scan images), and printers (to print images) here. A remote controlling application might show a saved list of computers to access here. A slide show application might show projectors here.

Settings

The Settings charm is an application-aware settings menu. Need to configure your program? Just go to the settings charm! Not terribly ground-breaking, but a nice touch and a good item to round out the Charms menu.

So, What’s The Problem?

The Charms bar has a few issues. The most grievous is the fact that, once again, it’s tied to the Modern UI. On the desktop, the Charms bar does nothing but get in the way. Nothing about the Charms bar is intrinsically tied to the way the Modern UI works, yet Microsoft has forbidden traditional applications from accessing it.

Additionally, Microsoft has done a very poor job educating users on what it does. Some of the names are confusing, and going into the menus doesn’t really help clarify things. That’s not to say that people can’t learn. We learned what these hieroglyphs mean:

[Image: start_button]

[Image: power_symbol]

[Image: apple_menu]

What Microsoft forgets is that we find these to be “intuitive” because we all learned what they mean years ago. What about the word “Start” makes you think “my programs are in here?” What does the silhouette of an apple have to do with shutting your computer down? What is that circle icon even supposed to be? We know these things because we were told. There really needed to be a plan in place to educate people on the use of the Charms bar. But instead of doing a little outreach, they’ve killed one of the most innovative features of Windows 8.

Blinded By Dollar Signs

You may have noticed that none of these issues are really that big of a deal if you use Modern UI applications. So why wouldn’t developers adopt the Modern UI the way they adopted new GUI Libraries in the past?

To install a Modern UI application, it must be downloaded from the Windows store.

That right there is what killed Windows 8. Microsoft saw how much money Apple was making in its App Store and thought “I want that.” What Microsoft failed to realize is that Apple has always carefully curated the hardware and software of its ecosystem, and this is something Apple’s customers like. Windows does not have this culture.

While not as free as Linux and friends, Windows has always been an open platform. Sure, the operating system itself is closed, but it does not restrict what you can accomplish. It doesn’t impose its will; it provides tools and a workshop and tells developers to have fun. This all changed with Windows 8.

To get an application into the Windows store, you must first get an account with Microsoft. This account costs $99/year for companies, and $19/year for individuals, but any price of admission immediately alienates a class of developers. This is actually the catalyst that fueled my switch to Linux. One can’t really create an open source application using the Modern UI, because any fork would have to be submitted to the Windows store as well, costing that person money.

Assuming you are undeterred, and get the account, you must submit your application to Microsoft for approval. Microsoft can at this point reject your application, or force you to change it. Prior to Windows 8, there was no approval process; anybody was free to ship any program they wanted. Microsoft didn’t know or care what you did to your own computer.

Finally, Microsoft gets a cut of your profits. This was the real issue for most major software firms. They had a choice: deploy a Modern UI application through the Windows store and share their profits with Microsoft, or continue to deploy traditional desktop applications and keep 100% of the profits.

This was not a difficult choice. Unlike with Apple and their App Store, Windows users do not expect to get their software from the Windows store; they are used to getting their software via different methods. Windows users did not flock to the Windows store, and those that did look found nothing but garbage. The backlash against Windows 8 was severe and immediate. Unfortunately this vicious cycle could only end with the death of the innovative features of Windows 8. The real losers here are the users.

Change is hard, and I can appreciate that people might not agree with me. But people weren’t even given the chance to give the changes an honest shot! No user can be blamed for forming the opinions that they have; the Start Screen, Modern UI, and Charms Bar were dead on arrival. They were sacrificed on the altar of greed for a quick buck.

I, for one, mourn their loss.

Doesn’t Play Nice With Others

For the last few weeks, I’ve been looking into making a Printer Module in Haskell. I must say, it’s been a pretty miserable experience. Not the Haskell part, that was ok. No, my issue is more basic. It seems that Haskell doesn’t like to share.

My plan was to build a module in Haskell to do the printer logic, then link that module as a library into C, which will be imported by the Core as normal. A preliminary look around the internet confirmed that this is supported behavior. There are a few trivial examples peppered throughout the internet, so I set to work, confident that this was a solvable problem.
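For context, the Haskell side of such a plan generally looks something like the sketch below. This is not DMP Photo Booth’s actual code; the function here is a hypothetical stand-in, just to show the foreign export mechanism:

{-# LANGUAGE ForeignFunctionInterface #-}
module DmpPrinterModule where

import Foreign.C.Types (CInt)

-- Hypothetical printer entry point exposed to C. Returns 0 for success.
printPhotoStrip :: CInt -> IO CInt
printPhotoStrip copies = do
    putStrLn ("Pretending to print " ++ show copies ++ " copies")
    return 0

foreign export ccall printPhotoStrip :: CInt -> IO CInt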

Giving Cabal A Shot

Cabal is Haskell’s package management program. In addition to this, it serves as Haskell’s answer to make. With a simple call to:

cabal init

You are presented with a series of questions about your package. After filling out the form (and selecting library), Cabal creates a Setup.hs file. Calling:

runhaskell Setup.hs configure
runhaskell Setup.hs build

…produces a .a library for your package. Success, right? Unfortunately, when you try to link that into a shared library you get a linker error stating something to the effect of “can not be used when making a shared object; recompile with -fPIC“. After hours of research, I determined that this is because Haskell’s libraries have not been compiled with -fPIC, which prevents those static libraries from being linked into a shared object.

Trying GHC

The Glasgow Haskell Compiler can be used to compile libraries directly. Having given up on cabal, I decided to try to cut out the middle man and use GHC. After much tinkering, I came up with a Makefile that worked, which I will preserve here for posterity:

COMPILER=ghc
HS_RTS=HSrts-ghc7.6.3
OUTPUT=dmp_printer_module.so

all:
	"${COMPILER}" --make -no-hs-main -dynamic \
	-l${HS_RTS} -shared -fPIC dmp_printer_module.c \
	DmpPrinterModule.hs -o ${OUTPUT}

This makefile compiles any required Haskell scripts, as well as a C “glue” source file that initializes and finalizes the Haskell environment. More on what goes in that file can be found in the GHC documentation.
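For the curious, my understanding is that the glue file boils down to something like the sketch below. The wrapper function names here are hypothetical; the real requirement is just that hs_init is called before any exported Haskell function is used, and hs_exit is called when you’re done:

#include <HsFFI.h>

#ifdef __GLASGOW_HASKELL__
#include "DmpPrinterModule_stub.h" /* generated by GHC for the foreign exports */
#endif

/* Hypothetical hooks for the C host to call before/after using the module. */
void dmp_pm_module_init(void)
{
    static int argc = 1;
    static char *argv[] = { "dmp_printer_module", NULL };
    static char **argv_ptr = argv;
    hs_init(&argc, &argv_ptr); /* start the Haskell runtime */
}

void dmp_pm_module_finalize(void)
{
    hs_exit(); /* shut the Haskell runtime down */
}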

Cool, good to go, right? Wrong.

Couldn’t Find That Dyn Library

So, I started adding things to the module to make sure it didn’t break. After adding some dependencies and trying to recompile, I started seeing this error:

Could not find module `[SOME_MODULE]'
Perhaps you haven't installed the "dyn" libraries for package `[SOME_PACKAGE]'?

After more hours of research, it turns out that for a module to be used in a shared library, it must be compiled as one. Seems logical, but that would imply that all module developers have to go through this nightmare. And the developers of any dependencies they use have to have done so. And so on…

Since even Prelude hadn’t done so, I set off to figure this out. After poking about, it turns out that Debian provides a package ghc-dynamic, which provides the dyn libraries from Base. I installed it, and things were checking out. However, the dependencies I was using still did not work.

After some more research, I found a suggestion that I re-install all my Cabal packages using the --enable-shared flag, which would provide me with my dyn libraries. I gave it a shot, but since my dependencies’ dependencies hadn’t done so, I got the same errors.

Some more research suggested that I could delete the .ghc folder in my home folder, then re-install all Cabal packages. This would force them to rebuild. However, I encountered the same issues.

The Man In Black Fled Across The Desert…

I’m beginning to feel a bit like Roland, ceaselessly chasing after the Dark Tower. Every time I get there, my journey starts over again. I clear one roadblock, and there’s another there waiting for me.

It seems like there isn’t any real interest in calling Haskell from C, and I must say that I am extremely disappointed. Calling C from Haskell works great, but when asked to share its toys with C, Haskell takes its ball and goes home.

I’m sure it’s possible to do meaningful work in Haskell, and call that from C. However, the amount of work I would have to do to attain that goal is not something I’m willing to accept. For this reason I am shelving Haskell for the time being. Maybe I’ll pick it up again for some other project, but it’s not a good fit for DMP Photo Booth.

What Is The Deal With Open Source?

So maybe you’ve heard talk of something called “Open Source”. Maybe a friend or family member told you about it, or you heard about it on the internet. Maybe you use it. Usually, when somebody professes the virtues of open source, they speak of it as this great force of good protecting the masses from the Evil Corporations. If Superman were a piece of software, he would be open source, here to protect you from Lex Luthor’s new giant mechanized monster: The Micro$oft Abominatron.

The point is that these arguments are emotional. Conventional wisdom would tell you that this is because Computer People are bad at explaining things. The fact is that open source is a difficult concept to internalize. It seems ridiculous; if your application is so great, why put it out there for free for anybody to use or modify? Why not sell it and become the next Bill Gates?

There are good answers for these questions, and in this post I’d like to discuss some of them. This is a post for the lay person, and as such I’ll keep the technical jargon to a minimum. That said, when you speak in analogies, little important details can get lost in translation. Please bear with me. With that out of the way, let’s get down to it.

[Image: What's The Deal With...]

So, what is the deal with Open Source?

A Brief Overview

An open source application is an application whose source code is freely available to modify. These applications are licensed under one of various open source licenses. These licenses are similar to the one you click “accept” to when you install a program. The license describes the restrictions on what you can do with the open source application. There are very permissive licenses, such as the MIT and BSD licenses, which basically say “do what you want with this software, just give credit where credit is due” (allowing the code to be used anywhere).

There are also very restrictive licenses, such as the GPL, which basically says “you may do what you want with this software, so long as any derivative works are distributed under this same license” (effectively ensuring the code is only used in other open source projects).

Whatever license an open source project chooses, the intent is the same: the code for the application is freely available somewhere, and you are free to use it in your own project, so long as you respect the terms of the license. You could even take an open source application, change its name, and release it as your own if you wanted to, though this isn’t typically done, for various reasons.

While open source means that the code is free for anybody to access, this doesn’t mean that just anybody can make changes to the original project. Typically the group behind a project will regulate who can make changes, and how changes get made. Some organizations are quite open about accepting changes from the community. Some don’t accept changes at all. In this way, the “owner” of an open source project controls the software, and maintains the quality of the product.

Why Should You Care?

At this point you may be thinking “that sounds all fine and good, but why should I care about any of this?”. It does seem awfully warm and fuzzy, but there are some very good reasons you should care.

Money

The most obvious reason is money. Suppose you want to type a document. You’d need a word processor, and the obvious choice is Microsoft Word. So you go to Microsoft’s online store and search for Word. You click on the link for Word and see that it costs $109.99. Just for Word. The entire Microsoft Office bundle costs $139.99. They will also allow you to pay them $99.99 every year for Office.

What if I told you that there was an open source office suite called LibreOffice that you could download right now for $0? LibreOffice can do everything Office can do, but costs nothing. I use it for all of my word processing needs, and I can tell you I will never pay for another copy of Microsoft Office ever again.

Don’t want to spend $19.99/month for Adobe PhotoShop? Download GIMP for free. Don’t want to spend $119.99 for Windows? Download Ubuntu for free. The list goes on and on…

Respect

The very nature of being open source means that the makers of software need to respect the community. Recently, Microsoft released Windows 8. Windows 8 brought with it a whole new user interface. This new interface hasn’t been a huge hit with consumers, but Microsoft knows that it is free to do whatever it wants because we’re stuck with them. There is a lot of software that only works on Windows, and with Microsoft being the only legal source of Windows, there’s not much we can do. It’s either Microsoft’s way or the highway. Not so with open source.

Any open source project has a third option: Fork. “Forking” an open source project means to make a copy of the source, give it a new name, and begin developing it on your own. Much like a fork in the road, the old project keeps going on its own, and the new project bears off into a new direction. Forking an application successfully is a very tall order. The old project is seen as the “definitive” version, with an established user base. To fork an application, you have to prove that your version is legitimate. Just because you forked a respected project doesn’t mean you get that project’s userbase; you must build your own. (This is also what prevents people from “stealing” your project: if someone forks your project and the fork surpasses your version, that means they’ve added value you haven’t.)

This happens often in open source. Sometimes the two forks coexist peacefully. Sometimes the fork dies, and the original regains any lost users. Sometimes the fork surpasses the original. However the cards fall, the community had its say; the better application won. This is a win for the community as well.

Back to the original topic, with this third option of forking, the developers can’t take their users for granted. At any time the userbase could revolt. Most likely, it’s not practical for you to fork an open source project. However, odds are that if your issue is real, others have it as well. Given enough unrest a fork is inevitable.

Transparency

Lately all the talk in the news has been about NSA monitoring. The NSA has been collaborating with various software development firms to build methods into their software that allow the NSA access. Even if you support what the NSA is doing, this should concern you. After all, if the access point exists for the NSA, it exists for anybody. Given enough time, hackers will find these access points and exploit them.

This sort of thing cannot happen with open source software. Any change that gets made to an open source project is done in the open. Anybody can see the change, and the exploit has a very good chance of being discovered. There can be no back-door dealings with anybody if the house you’re coming in the back of has no walls.

What’s In It For Developers?

All that is great, but I still haven’t answered the original question: why would anybody do this?

Personal Gratification

Starting at the most basic, developers want personal gratification. Maybe they want to feel like they did something good. Maybe they want to feel part of something big. Maybe they want to stick it to the man. Maybe they want to give back to the community. Whatever it may be, it can be a powerful motivator.

Open Source As A Portfolio

Much like an artist, a developer needs a portfolio. Open source software can be a good way to demonstrate knowledge and experience with a programming language or framework. A developer could make a closed source application on their own time, but they wouldn’t be able to show it off for fear of “revealing their secrets”. Meanwhile, a developer working in the open can show a potential hiring manager specific examples of their work. Additionally, being an active contributor to a respected open source application can be a major boon to a resume.

Branching Out

Suppose a developer works at a company exclusively coding in Language X. Let’s call this developer “Brad”. Development is a fluid field; what is the big language today might be nothing a year from now. It is very important to not become stagnant. Unfortunately for Brad, he’s an expert in Language X, he’s barely touched up-and-coming Language Y. Even worse for Brad is the fact that his employer doesn’t care about his professional development. Brad’s job is to develop in Language X, and incorporating Language Y is a large risk that the financial people don’t feel is worth taking. Brad has no choice but to gain experience in Language Y on his own time.

Enter open source. Brad can find an open source project written in Language Y, and contribute. Brad could also start his own project in Language Y if he wanted. Either way, he’s got options. Development is a unique field in that one can reasonably “gain experience” on their own time. One can’t be a doctor, a lawyer, or a school teacher on their own time. But if Brad contributes to open source using Language Y, not only does he become more proficient in Language Y, but he has tangible proof of his experience in Language Y.

Similarly, a developer might have burned out on the technologies they use at work. Open source represents an opportunity to work on a project that excites them. The bean counters at work don’t care what you want to do, but nobody can tell you what open source project you have to commit to. A developer can contribute to a project that excites them, and they are free to move on at any time.

It’s Their Job

Shockingly, companies even pay developers to work on open source projects. For various reasons that I’ll get into later, companies use open source software. The direction of open source projects is often driven by those who contribute, and for this reason companies often hire developers to develop the aspects of an open source application that they care about.

But Think Of The Corporations!

Don’t worry, I’ve got them covered. As I mentioned in the last point, even companies care about open source. Even the tech giants like Google and Apple depend on and contribute to open source.

Using Open Source Libraries and Languages

Software companies often use open source libraries and languages. A library is a small piece of code that does a specific thing. It is not a full application in its own right, but can be critical to making a full application. Think of it like LEGO. If the individual LEGO blocks are a language, and the finished model is an application, then libraries are smaller bits made up of blocks. You assemble these smaller bits, then piece them together with some additional blocks to make the whole model.

Libraries are critical to making applications. Libraries also make up a large portion of available open source software. Libraries using restrictive licenses like the GPL and friends don’t tend to make it into closed source software, while libraries with permissive licenses like BSD and friends are often incorporated into closed source projects.

Similarly, programming languages themselves are often open source. This allows organizations to add features to a language to suit their needs. This sort of thing would not be possible with a closed-source language.

Outsourcing Development To The Community

Often, companies will open source products. This allows the community to scrutinize the software, adding new features, fixing bugs, and providing various other forms of feedback. This costs a company nothing, but can save it millions in development costs. The company can then use this feedback to improve the product. Typically, companies will open source components of a larger project that they sell. This allows them to crowdsource bug fixes and the like while still being able to sell their product.

They aren’t too concerned with the community forking their product because they still remain the “definitive” version, and there is little risk of them losing business.

Alternative Forms Of Income

Some tech companies just aren’t in the business of selling software. Google is one of the largest and most respected software development firms operating today, but they don’t actually sell software. Google is in the business of selling demographics data to advertisers. This puts them in the position of being able to freely leverage open source software to make their products. Google provides the Google search engine, which is implemented using a variety of open source and closed source software, all of which is transparent to the users. Additionally, Google provides the Android smartphone OS, which is based on the open source Linux kernel. Google provides countless other free applications and services using open source, and they can do this because these services allow them to glean more and more demographics data.

Similarly, Apple is in the business of selling devices. Sure, Apple sells software, but most of their revenue is from selling various iDevices. Apple is also another huge contributor to open source. While Google primarily uses open source to implement services, Apple outsources development to the community. Apple backs various open source projects and incorporates them into their software. Mac OS X itself is a descendant of BSD UNIX, an open source operating system. A good example of this model is WebKit. WebKit is a piece of software used to read and display web pages, commonly referred to as a “rendering engine”. Apple is a major backer of WebKit, devoting many developers to it. When Apple deemed it ready, they incorporated it into their Safari web browser. Many other web browsers use WebKit as well, including Google Chrome. All largely thanks to Apple.

Another example of this is Apple’s open-source CUPS printing library. If you’ve ever printed something from a non-Windows computer, you probably have Apple to thank.

What Can You Do?

The final piece of this puzzle is: what can you do to contribute? Open source is a collaborative effort, and every little bit helps. However, as a lay person, you probably aren’t going to be contributing code or documentation. You can still help.

The First Step

The first step is to use open source. Personally, I believe that just using an open source project is contributing. By using it, you are legitimizing the project. You are a real person using this software; somebody actually cares! Given all the reasons I outlined above, this seems like a no-brainer.

Secondly, you can spread the word. I’m not suggesting you be that guy who rants and raves about open source, frothing at the mouth about how anyone who doesn’t use it is stupid, a Bad Person, and a supporter of the “Evil Corporations”. No, I’m suggesting that when given the opportunity to tell somebody about an open source application in a socially appropriate way, you rationally tell them about Open Source Project X, and that you use it because of [Logical Reason]. Is your friend frustrated with Microsoft Office? Tell them about LibreOffice. It’s that easy.

A Bit More Dedicated

If you’d like to do more, there are plenty of options. Many larger open source projects have statistics-gathering features built into their applications, things like Debian’s Popularity Contest and Firefox’s Telemetry. Typically, these features can be disabled; however, if you don’t mind a minor loss of privacy, you can consider enabling them.

Open source projects do different things with this data. Some use it to determine what kinds of people are using the software, and what things they use most often. This allows them to focus their efforts on the things that “matter”. Letting projects collect this sort of data allows them to improve the things that you use, and that you care about. This helps them because if they know what people want, they can better provide it.

Some open source projects might sell this data. Some might say this is an unacceptable breach of privacy (I count myself among those who would say that. For this reason I usually disable these features), but the fact is that they do this to get funding to improve the project. If you allow an open source project to sell your demographics data, it will make their data more valuable and help contribute to the quality of the software.

Either way, given the nature of open source, whatever information they are gathering from you is all out in the open for anybody to see. While you may not know what you’re looking at, there are people who do. Since typically only major open source projects do this sort of thing (minor projects don’t have a large enough user base to get any useful/valuable data), odds are almost 100% that somebody has independently reviewed it. A quick Google search can tell you what they are gleaning and what it’s being used for. Personally, I take privacy very seriously, and I’d advise anybody to do a bit of research before enabling these sorts of features.

Another good way to help is by submitting bug reports. Nobody likes buggy software, but many bugs are nearly impossible to catch in testing. If you’re using an open source application and you find a bug, tell somebody! Ideally, the application comes with a built in automated bug reporting system. Poke around a bit, and see if there’s a “report bug” button. If so, give it a shot. Along with your message, these systems will send information about your configuration that will be useful to the developers.

Failing that, most projects have an easily accessible bug reporting system; usually this can be found on their website. Go ahead and submit a bug report. Try to be as detailed as possible, but if you don’t know something, don’t sweat it. Even if they can’t duplicate the bug, they know that something is wrong, and what general direction to look in.

Giving Back

Finally, if you’re feeling generous, you could donate. I’m not going to name “worthy causes” here, but I shouldn’t need to. Do you have an open source application you really like? Donate to them. Maybe they have an online store, and you can get a T-Shirt or coffee mug. It really is that easy.

Now For Something Completely Different

For as long as I’ve been trying (successfully or not) to program, I’ve been using C like languages. When I was a kid, I struggled in vain to learn C++. As an adult, I learned Java. After that, I used Java as a spring-board into the wonderful world of C Like Languages: C, C++, Perl, Lua. I wrote hello world in dozens of others as well. I found myself proudly proclaiming that “I’m confident I could pick up any C Like Language!”

Then one day I thought “what about the rest of them?” Sure, maybe I can speak Latin. Maybe I can pick up any Latin based language with relative ease. But what if I need to move to China? I speak C, but what if C falls out of favor for something else? I decided it was time to try something else.

But What?

C and its cousins broadly represent the Procedural and Object Oriented paradigms. We’ve all been there and done that. Procedures and Subroutines may or may not take arguments, do something, and may or may not return a result. The global or local state may or may not change. Loops happen. I don’t think it is a stretch to say that these are the two most mainstream paradigms. For the purposes of this blog post, I’m going to lump the Procedural and Imperative paradigms together. I understand that they are not the same thing, but roughly speaking, the Procedural paradigm is an evolution of the Imperative paradigm.

This leaves us with Functional Programming. Unlike the functions of an Object Oriented or Procedural language, the functions of a Functional language closely resemble those in math. In math, given f(x) = x + 2, f(2) will always return 4. Similarly, a function in a Functional language will always return the same result given the same input. Where a function in a Procedural or Object Oriented language describes the steps to perform some task (usually, this involves some sort of loop construct), a function in a functional language just describes what the result of some function is (usually involving recursion). For instance, f(x) = x + f(x-1) describes the result as x plus the result of f(x-1), which in turn is x-1 plus f(x-2), and so on until the end of time.
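To make that concrete, here is a small Haskell sketch (a toy example, not from any real project) that describes what the sum of the numbers 0 through n is, rather than spelling out a loop that computes it:

-- sumTo describes the result: the sum of 0..n is n plus the sum of 0..(n-1).
sumTo :: Int -> Int
sumTo 0 = 0
sumTo n = n + sumTo (n - 1)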

So, what programming language to choose? Many languages support functional programming to an extent. Python, Lua, and even C# if you squint hard enough. However, these languages are multi-paradigm. As such, it will be easy to fall back into my C Like ways. What about Lisp?

Lisp is a family of languages: Common Lisp, Scheme, Clojure, Emacs Lisp. Sure, I could learn one, and theoretically be able to transition with ease, but this isn’t a level of fragmentation that I’m comfortable with. In addition, Lisps are multi-paradigm, so I’m more likely to not keep the faith. Which leaves me with…

Haskell

Haskell is a “pure” Functional programming language. While any useful program must have the side effect of reading from or writing to some external source, Haskell places that part of the program neatly in a corner. Let’s talk about some of the neat features of Haskell:

Lazy Evaluation

Expressions in Haskell are evaluated lazily. What this means is that a value isn’t computed until it’s needed. Let’s take a look at an example:

embiggen :: Int -> [Int]
embiggen x = x:embiggen (x + 1)

This function takes an integer, and creates a list out of it. (Lists in Haskell behave much the same way as a normal linked list: O(1) insertion, O(n) traversal) The passed-in integer is pushed on to the front of the list resulting from embiggen (x + 1). You may have noticed that this function will go on forever. While maybe not ideal, this is ok in Haskell because of Lazy evaluation. The infinityeth element of this list will not be evaluated until it’s needed!

show (take 5 (embiggen 5))
[5,6,7,8,9]
show (embiggen 5 !! 17)
22
show (embiggen 5)
[OMG INFINITE RECURSION!!!!!]

In the first example, we call the library function take, which returns a list containing the first n elements of the passed-in list. In the second example, we call the library function !! (all operators are functions), the list indexing operator, which returns the nth element of the list. In a language with strict evaluation, the list would need to be completely evaluated before these things could happen. In Haskell, it doesn’t! Only in the third example, where we attempt to call show on the entire list, does infinite recursion occur.

Tail Call Optimization

This is one of those terms that gets thrown around a lot, but what does it actually mean? The short answer is that it prevents a recursive function call from consuming a new stack frame. In a language without this feature, if foo() calls itself, the new call will consume a new stack frame. This will cause a stack overflow if allowed to go on too long. In Haskell, this isn’t a problem because of Tail Call Optimization.
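As a rough illustration (a toy example, not production code), here is what a tail call looks like in practice: the helper go carries its running total along, and since the recursive call is the very last thing it does, it can reuse the current stack frame instead of growing the stack.

{-# LANGUAGE BangPatterns #-}

sumTo' :: Int -> Int
sumTo' n = go n 0
  where
    -- The bang on acc keeps the running total evaluated, so lazy thunks
    -- don't pile up while we recurse.
    go 0 !acc = acc
    go k !acc = go (k - 1) (acc + k)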

Type System

Haskell’s type system is quite different from the usual type systems. Sure there are Ints, Chars, Floats, Bools, and the like, but there’s more to it than that. Haskell is very strongly typed. There is no casting in Haskell; if a function takes an Int, there’s no getting around giving it an Int. However, the whole type system operates in a manner similar to generics in languages like C++ or Java. Take the following examples:

putInList :: a -> [a]
putInList thing = [thing]

addStuff :: (Num a) => a -> a -> a
addStuff lhs rhs = lhs + rhs

The first function takes some arbitrary type, and returns a singleton list containing the passed-in argument. Much like generics, the argument shouldn’t depend on any type-specific behavior.

The second function takes two arguments of the same type that behaves like a number (Int, Float, Double, and friends), adds them, and returns the result. The addStuff function accomplishes this by specifying that arguments of type a must be members of the Num Typeclass. Despite the word “class”, Typeclasses aren’t the same as classes in Object Oriented languages. You CAN, however, think of them as being roughly the same as Java’s interfaces. When you create a type, you can make it a member of any number of Typeclasses. You must then implement the functions specified by the Typeclass, just like when a class in Java implements some interface, it must define the methods of that interface.

This is just the tip of the iceberg, but I’m sure you’re beginning to see how you can make very general functions in a very type-safe way.
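For example, here is a toy type made a member of the Eq typeclass by hand (normally you would just derive this, but it shows the “implement the interface” idea):

data Color = Red | Green | Blue

instance Eq Color where
    Red   == Red   = True
    Green == Green = True
    Blue  == Blue  = True
    _     == _     = False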

Partial Function Application

A feature of functional programming is higher order functions. This means that functions can take functions as arguments, and functions can return functions. While nice, this isn’t exactly a new concept. Even C supports this to an extent with function pointers. What is new is partial application of functions. Recall the addStuff function above. It takes two arguments of type a and returns a result of type a. Now let’s look at an example:

doNumFunc :: (Num a) => (a -> a) -> a -> a
doNumFunc f a = f a

addThree :: (Num a) => a -> a
addThree a = addStuff 3 a

The doNumFunc function takes a function that takes a type a and returns a type a (This is what (a -> a) means), and a second type a, and returns a type a. doNumFunc calls the passed in function with the second passed in argument. The addThree function takes a type a and returns a type a. addThree takes an argument, and calls the addStuff function we defined earlier with its argument and 3. How does this all pan out?

addThree 3
6
doNumFunc addThree 3
6

Seems pretty straightforward, right? Though, this isn’t very re-usable. What if I want to add 4? Do I need to define a function addFour? No, I can partially apply addStuff. If you call a function in Haskell with fewer arguments than it takes, it will return a function that takes the remaining arguments and returns a result! Observe:

doNumFunc (addStuff 3) 3
6

Now things are getting cool. By calling (addStuff 3), we’ve created a function that takes a type a, adds 3 to it, and returns the result! You can’t do that in C!
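To give a quick taste of where this pays off, a partially applied function slots straight into higher-order library functions. Sticking with the addStuff from above:

map (addStuff 3) [1, 2, 3]
[4,5,6]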

Getting Started

Excited yet? You know you are, don’t try to act like you’re not. But how does one get started? Like any language, you need two things to begin: a compiler/interpreter and some reading material.

Environment

First up, you should go download the Haskell Platform. This package contains your compiler/interpreter and all the standard libraries. Haskell can be compiled, or interpreted. Or, you could use ghci, the interactive interpreter, if you just want to doop around and try stuff.

If you’re running a Linux distro, haskell-platform is likely in the repositories. In Debian or Ubuntu, it’s a simple:

sudo apt-get install haskell-platform

… and you’re set! Unfortunately, there doesn’t seem to be a great IDE for Haskell. NetBeans definitely has nothing to offer in this regard. Luckily for us, Haskell is simple enough to not really need an IDE. GEdit, the default text editor that ships with Gnome, has built-in syntax highlighting for Haskell. Just enable the built-in terminal in GEdit to test stuff and you should be good to go. I like to run ghci in the embedded terminal to test functions as I write them. Plus, as you code, you can periodically attempt to load the script in ghci to make sure everything is formatted correctly and you haven’t messed up your syntax/types.
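A typical ghci session might go something like this (the file name is made up, and the compiler chatter is trimmed):

$ ghci
Prelude> :load Embiggen.hs
*Main> take 5 (embiggen 5)
[5,6,7,8,9]
*Main> :reload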

Literature

One of the biggest barriers to learning a new language is money. Nobody wants to put down cold hard cash on learning something new when what they have is working just fine. Luckily for us, you can learn you a Haskell for free! Learn You A Haskell For Great Good is a beginner’s guide to learning Haskell aimed at developers coming from C Like Languages. The best part is that the whole book is available to read online for free! You can check it out for the low-low price of zero dollars. If you like it, maybe you buy a copy for your bookshelf. Or maybe you just spread the word.

Whatever you do, you should have a good base of knowledge in Haskell. At that point, you can just consult Hoogle to learn more.

I Installed Something Called “Debian Unstable”

So, after weeks of procrastinating, the day finally came; it was time to upgrade Ubuntu. As many of you likely know, Ubuntu has a 6 month release cycle. New versions come out in April and October. The release of Saucy Salamander marked the first time I’ve had to deal with a Linux distro upgrade since I was running Fedora 8 back in 2008 (not counting a brief encounter with Debian Squeeze just prior to using Ubuntu). As I recall, my attempt to upgrade to Fedora 9 was a disaster. Nothing worked, and it was a huge amount of effort. It was so bad that I decided to cut my losses and just go back to Windows Vista.

Needless to say, I wasn’t terribly excited about upgrading to Saucy. Finally, about a week ago I decided to stop being lazy and just do it. While it wasn’t quite the disaster that Fedora 9 was, I wouldn’t call the upgrade “smooth”. The first thing I noticed was that I could no longer lock the display. Since my cat likes to perform unauthorized refactoring of my code if I leave the display unlocked, this would not do.

I did some googling, and it turns out that Gnome removed gnome-screensaver in Gnome 3.8. Gnome-screensaver controlled, among other things, locking the screen; all of that functionality was rolled into GDM. Ubuntu uses LightDM, so in order to protect my precious codebase I had to either switch it out for GDM or use a Gnome shell plugin. First, I tried to install GDM, but every time I logged in I would get a popup saying that GDM crashed. I switched back to LightDM and installed the plugin. Everything seemed to be going fine, but things were just a bit wonky. Every so often, when I’d go to unlock, the screen would freeze. I could just hope it was working, type my password, and press enter to unlock it, but I like things to work right.

Not a huge deal though, I thought. I decided that I’d just grin and bear it. However, things continued to come apart. I went about re-compiling DMP Photo Booth and its modules to make sure everything was working correctly with the updated software versions. For the most part it was, but my working splash screen was broken. When shown, the window would pop up, but the image on it would not show. It seemed my call to while (gtk_events_pending()) gtk_main_iteration(); was returning early. In the course of my investigation I decided to open the Glade UI file to make sure everything was right. The only problem? The version of Glade shipped with Saucy has a major bug that causes it to crash when you open a file with a dialog in it. You can read the bug report here.

For me, this was the straw that broke the camel’s back. It was time to try a new distro.

Let’s Meet Our Contestants!

Ubuntu GNOME

I’ve been running Ubuntu for a while now, and have been mostly satisfied with it. I do have some concerns about their direction, but I’m not quite ready to break out the torches and pitchforks. However, I much prefer Gnome 3 to Unity, so I figured it was time to switch to a Gnome-centric distro. Luckily, there is an Ubuntu distro that focuses on Gnome: Ubuntu GNOME. My concern with this is that they seem to have manpower issues. I don’t feel like getting attached just to have the rug pulled out from under me, so I won’t be using Ubuntu GNOME.

Fedora 20

I feel that it is fair to say that Fedora is to Red Hat as Ubuntu is to Debian. Fedora is an old, mainstream Linux distro that has the financial backing of a large company behind it. It is likely to be around for years to come. Better yet; Fedora is a Gnome distro. Fedora 20 ships with Gnome 3.10, the current latest and greatest.

Back in 2008, I tried to run Ubuntu. Back then, it didn’t “just work”. Fedora did. Maybe it was time to don my Fedora and come home to my first Linux distro. I downloaded the live DVD for Fedora 20, and booted it up. Everything seemed great; Gnome 3.10’s fancy new UI elements were incredibly profound. Mozart and Da Vinci would surely be reduced to tears at the sight of their magnificence. I was sold. I started the installer and got to work. I set my language, hostname, and then went to configure my partitions. …aaaaaaand no hard drives detected. Crud. After some googling around, this seems to be a known issue. The Googler told me that I could disable SELinux and it would work, but no luck. I was told that I could use the non-live image and it would work, but no luck. Well, so much for that idea. I filed my Fedora installation media in the round file and decided what to do next.

Debian Sid

It seems that the cool kids are running Debian these days. I’ve used Debian before, and had good experiences with it (uptime on my Debian Squeeze home server shows 102 days). The one sticking point is how old the software is. That is, old in the stable release; Debian Unstable has up-to-date software. The cool kids assure me that Sid is still more stable than Ubuntu or Fedora, so I decided to give it a shot.

The Installation

Installing Sid is slightly more tricky than Ubuntu or Fedora. Here’s the installation blurb on the Debian Wiki:

Use the stable installer to install a minimal stable system and then change your /etc/apt/sources.list file to testing and do an update and a dist-upgrade, and then again change your /etc/apt/sources.list file to unstable and again do an update and a dist-upgrade. ... If this seems too complicated you should probably not be using unstable.

With those words of encouragement, I set off to work. I downloaded the Debian 7 net install media, and installed. I followed the wizard, setting up the usual things. For partitioning, I formatted my /boot and / partitions, and preserved my /home partition. I spoke about this before in a previous post, but the short answer is that this keeps you from having to back up your data and settings. You should probably still do that stuff in case you do something stupid, but if all goes well you won’t need to.

When the time came to select additional software, I deselected everything. I finished the install and rebooted. The system booted up to the command line, and I logged in and su‘d to root. Now that my Wheezy install was complete, it was time to upgrade to Jessie. This is accomplished by editing /etc/apt/sources.list and changing every instance of the word wheezy to testing. While I was at it, I added contrib and non-free so I could get things like my wifi driver and flash. Next order of business was to install apt-listbugs and apt-listchanges. These two packages change apt-get to warn you of bugs in software, so you don’t blindly install some software that will light your computer on fire. After that:

apt-get update
apt-get dist-upgrade

…then I ate lunch. This process upgrades my system to testing, and it takes a while. After it was done, I repeated the steps above, replacing all instances of testing with unstable in my sources.list. Additionally, I had to delete the lines:

deb http://URL/ testing/updates main
deb-src http://URL/ testing/updates main
deb http://URL/debian/ testing-updates main
deb-src http://URL/debian/ testing-updates main

…these don’t exist in Unstable.
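For reference, with the mirror URL elided just like above, my recollection is that the lines that remain end up looking roughly like this:

deb http://URL/debian/ unstable main contrib non-free
deb-src http://URL/debian/ unstable main contrib non-free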

While the apt-get dist-upgrade was running, it was time to watch some TV.

Finally, when apt-get dist-upgrade completed, I had a Debian Sid system. One problem: it was command line only.

A Few More Things

First things first, I needed to set up sudo:

adduser [username] sudo
init 6

After the reboot, my user is set up to use sudo.

I had to install some software. First up is Gnome:

sudo apt-get install gnome

This starts a 1.3 GB download, so I watched some more TV. When that finished, I needed to install my wifi driver so that I could disconnect my temporary cat-5 cable:

sudo apt-get install firmware-iwlwifi

Next up is the Debian laptop task package. It installs the software that would be installed by selecting the laptop task in tasksel:

sudo apt-get install task-laptop

I rebooted into Gnome. I logged in and connected to my wifi. Since I preserved my /home partition, all my settings are still set up from Ubuntu, so there is very little aesthetic configuration to be done.

The gnome package in Debian installs some other things besides Gnome. Among those things is LibreOffice, so I don’t have to worry about that. However, there are a few usability packages to install:

sudo apt-get install flashplugin-nonfree
sudo apt-get install synaptic
sudo apt-get install pkg-config

At this point I had a basic system set up. Now it is time to make sure DMP Photo Booth still works. Since I preserved my /home, NetBeans is still installed. However, there is no JDK installed. This was an easy fix:

sudo apt-get install openjdk-7-jdk

Now it is time to install the dependencies for DMP Photo Booth:

sudo apt-get install libmagickwand-dev
sudo apt-get install libglib2.0
sudo apt-get install libgtk-3-dev
sudo apt-get install cups libcups2-dev

Some of the development tools I need still aren’t installed. GCC is installed, but for some reason gdb isn’t. Also, to do work on the trigger, I’ll need avr-gcc:

sudo apt-get install gdb arduino
sudo adduser [username] dialout
sudo init 6

Finally, I need to install Glade to modify DMP Photo Booth’s UI:

sudo apt-get install glade

And that’s it!

Impressions

It took me a good half of a day to get it all working, but so far so good. Iceweasel is still a thing, but mozilla.org thinks it’s the latest version of firefox, and my addons still work so I’m not going to worry about it. Plus, weasels rule and foxes drool.

Glade is working now, but DMP Photo Booth’s working screen is still broken. However, I’m beginning to think it never was really working right in the first place.

All in all, it's been a successful install. One week in, and I still don't miss Ubuntu. Hopefully Sid is good to me, and I've found my salvation from installing a new distro version every six months.

Last Train Out Of Cairo

After recovering from Christmas, and the terrible events of late 2013, it’s time to put my nose back to the grindstone with the printer module. My latest task: make the printer module not be terrible.

What's so bad about the printer module, you ask? The short answer is all the things. All bad. Every single one of them. It's slow. It doesn't print right. It consumes way too many resources. It makes my cat sad.

The Brotherhood Of The Printer Developer

It all started when trying to figure out this whole printing thing. It turns out that printer development is one of those secret development clubs. There are no tutorials, the API documentation leaves something to be desired, and printing in Linux is bad in general. In short, it’d be easier to join the Illuminati than to infiltrate the Dark Cabal of Printer Developers.

Just by reading the CUPS API documentation, it's not difficult to hash out a simple hello world type printer application; a rough sketch follows below. However, as anybody who has ever printed something knows, printing has lots of knobs to fiddle with, and the CUPS API does not seem to have functions corresponding to things like paper size and DPI.
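
For the curious, that hello world boils down to something like this. The file name is a placeholder, and cupsPrintFile just shoves the file at the default printer, no questions asked. Build it with gcc hello_print.c $(cups-config --cflags --libs).

#include <stdio.h>
#include <cups/cups.h>

int main(void)
{
    /* Print a file to the system default printer with no options */
    const char * printer = cupsGetDefault();
    if (printer == NULL) {
        fprintf(stderr, "No default printer configured\n");
        return 1;
    }

    int job_id = cupsPrintFile(printer, "photo_strip.png",
            "DMP Photo Booth", 0, NULL);
    if (job_id == 0) {
        fprintf(stderr, "Print failed: %s\n", cupsLastErrorString());
        return 1;
    }

    printf("Submitted job %d to printer %s\n", job_id, printer);
    return 0;
}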

During my research, I managed to turn up all of one StackOverflow post on the topic. The gist of it being “you set that up yourself using PostScript and send that to CUPS.” It also provides a sample implementation using Cairo.

Seems Reasonable

I decided to give it a shot. If nothing else, it would be a good introduction to the Cairo library for me. In my youth, I was fond of using Java's Graphics2D library to make all sorts of fancy UI elements. In slightly oversimplified terms: Cairo is the GTK equivalent of Graphics2D. This isn't entirely accurate: Cairo is a vector graphics library that GTK just happens to have adopted. Cairo is very usable outside the context of GTK; it can author a variety of file types, including PDF and PostScript.

I decided I’d use Cairo to author postscript within the printer module.

The Implementation

cairo_surface_t * base = cairo_ps_surface_create(
        "[temp_file].ps", [WIDTH], [HEIGHT]);
cairo_surface_t * image = cairo_image_surface_create_from_png(
        "[photo_strip_filename]");

First, I create 2 cairo_surface_t pointers. A Cairo surface is sort of like a canvas that you paint on. For those of you familiar with Java's Graphics2D, you can think of it like your Graphics2D instance. cairo_surface_t is the base class of all Cairo surfaces; there are a variety of surface types for things like PostScript, PDF, PNG, X Windows, or whatever else. The first surface is an empty PostScript surface that represents our finished product. The second surface is created from our .png formatted photo strip.

cairo_t * working = cairo_create(base);

If a cairo_surface_t is your canvas, then cairo_t is your brush. Think of it like a Java Stroke object. Right here, we are creating a new cairo_t from our base Cairo surface.

cairo_set_operator(working, CAIRO_OPERATOR_DEST_OVER);
cairo_set_source_surface(working, image, 0, 0);
cairo_paint(working);

The basic idea is that you apply operations to a cairo_t, then you apply your cairo_t to a cairo_surface_t. Here, we are compositing our PNG surface over our PostScript surface. First we set the operator of our cairo_t to composite over the top. Next, we set the cairo_t to have our image Surface as its source surface. Finally, we call cairo_paint which will apply our cairo_t to the base surface.

cairo_surface_show_page(base);

This call saves our PostScript file.

cairo_surface_destroy(base);
cairo_surface_destroy(image);
cairo_destroy(working);

No C function is complete without a bunch of cleanup at the end. Here we call cairo_surface_destroy to free our cairo_surface_t pointers and then we call cairo_destroy to free our cairo_t pointer.
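
Putting those pieces together, the whole routine looks roughly like this. The file names are placeholders, and note that a PostScript surface's dimensions are given in points (72 per inch), so a 4x6 inch strip works out to 288 x 432:

#include <cairo.h>
#include <cairo-ps.h>

static void author_photo_strip_ps(const char * ps_path, const char * png_path)
{
    /* The blank PostScript "canvas" and the PNG photo strip */
    cairo_surface_t * base = cairo_ps_surface_create(ps_path, 288, 432);
    cairo_surface_t * image = cairo_image_surface_create_from_png(png_path);

    /* The "brush" we draw with */
    cairo_t * working = cairo_create(base);

    /* Composite the PNG onto the PostScript surface */
    cairo_set_operator(working, CAIRO_OPERATOR_DEST_OVER);
    cairo_set_source_surface(working, image, 0, 0);
    cairo_paint(working);

    /* Emit the page, then clean up */
    cairo_surface_show_page(base);
    cairo_surface_destroy(base);
    cairo_surface_destroy(image);
    cairo_destroy(working);
}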

PS: You’re Doing It Wrong

That all seemed pretty great right? I thought so too. Here’s the problem: go check out that PostScript file you just created. Notice how it is 200 MB? Yeah…

It turns out that enormous PostScript files are a common problem. While we could just delete this file when we're done, we're still creating this gigantic file and then shoving it down our printer's throat. My printer is on WiFi, so it takes a good 2 minutes to print this file, and it brings my computer to a crawl while it's doing it. No user is going to want to wait 2 minutes for their photo strip to print.

The second problem is actually a “feature” of PostScript. PostScript is a document layout language, and due to this the printer will take your PostScript file’s word for what it wants done. This sounds nice, until you realize that Cairo isn’t actually a PostScript authoring library. Cairo’s ability to tune a PostScript file is pretty limited. Specifically, this is a problem for things like DPI. I’m trying to print high quality images at 600 DPI. However, Cairo can’t set this in the PostScript, so the printer ends up spitting out a massively blown up copy of the image. This will not do…

The Solution

So, I’m back to square one. PostScript is a dead-end, and CUPS won’t let me customize my job. What to do…

I thought back to my hello world printer application. I was able to print a random image that was blown up to the size of my paper. What if I printed an image that was exactly the correct size for my paper? I gave it a shot, and sure enough there were no scaling issues!

I can set my printer to 600 DPI, then print my (600 * 4) x (600 * 6) image and it just works, just like Some Guy promised it would. All is once again well in the world.
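
Roughly, the final version looks something like this. The "Resolution" option name is an assumption on my part (it depends on your printer's PPD), and the file name is a placeholder; the important part is that the PNG is already exactly (600 * 4) x (600 * 6) pixels, so the printer has nothing to scale:

#include <stdio.h>
#include <cups/cups.h>

int main(void)
{
    /* The strip is rendered at exactly 2400 x 3600 pixels:
       4 x 6 inches at 600 DPI, so no scaling is needed. */
    cups_option_t * options = NULL;
    int num_options = cupsAddOption("Resolution", "600dpi", 0, &options);

    const char * printer = cupsGetDefault();
    int job_id = printer ? cupsPrintFile(printer, "photo_strip.png",
            "DMP Photo Booth", num_options, options) : 0;
    cupsFreeOptions(num_options, options);

    if (job_id == 0) {
        fprintf(stderr, "Print failed: %s\n", cupsLastErrorString());
        return 1;
    }
    return 0;
}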

Plus, I got some Cairo experience under my belt. Look forward to fancy curved lines and gradients in future versions of DMP Photo Booth! (Joking, I promise)
