Sunday, January 15, 2012

Sourcefabric, the Open Source Newsroom

The non-profit Sourcefabric builds digital open source newsrooms to support quality, independent journalism. Traditional news organizations have taken a major beating from the Internet, but the Internet has also created opportunities for a free press in countries that have never had one before, and Sourcefabric is part of journalism's path to the future.

Two of my favorite quotations are "Freedom of the press belongs to those who own one" (A.J. Liebling), and "Money changes everything" (Cyndi Lauper and Tom Gray).

Tie these together with the Golden Rule, "the one with the gold makes the rules," and the struggles and turmoil of modern journalism come into sharp focus. It's a different game now: print news has drastically declined, TV news is limping along in a stale 40-year-old format and inexplicably tries to replicate this moldy experience on the Web, and we get our news from a multitude of non-journalism sources like Facebook, YouTube, and Twitter.

Online news publishing should be a great boon because the cost of distribution is low, and the reach is Internet-wide. But the business model is difficult because users don't pay; it is primarily advertiser-supported and enslaved to SEO (search engine optimization) voodoo, which means Google is the giant tail wagging the publishing industry dog. News publications get paid for driving traffic, rather than for publishing good quality material.

Change is a multi-edged device, and while the Internet and high tech have damaged traditional news organizations (think buggy whips), they have also lowered the barriers to entry and widened journalism's scope. Digital photographs, videos, and audio streams are considerably easier to edit, produce, and distribute than in the olden ways. Journalists are truly mobile and news publishers are not tied to a physical location, or to large investments in printing presses and broadcast studios.

So it's a different world now, and potentially a better one because independent and non-profit news organizations can compete with the establishment mainstream press (what's left of it), and fill niches that the traditional news organizations are not interested in. Like working weekends and holidays, breaking news as soon as it's ready in multiple formats and media, and covering important topics that may not please advertisers but are of interest to readers and viewers.

Real journalism is more than sitting down to a WordPress blog and emitting deep thoughts. It is a skilled profession, and it has always been a technology-dependent business. It may not be apparent to the reader, viewer, or listener of news, but putting a story together and then publishing it is a complicated, labor-intensive process. This is where Sourcefabric comes in.

Sourcefabric is still a young project. You may have heard of the Campware (Center for Advanced Media-Prague) suite of news and content management software; Campware was created in 2005, and then spun off as an independent organization, Sourcefabric, in April 2010. Campware was originally created by the Media Development Loan Fund (MDLF), which has been around since 1995 as "a mission-driven investment fund for independent news outlets in countries with a history of media oppression."

Sourcefabric provides software, hosting services, and training. Currently there are three main software applications: Airtime, Newscoop, and Superdesk. Each one plays a different role in gathering and publishing news, so let's take them for a spin.

Airtime is for managing an Internet radio station. It runs on Apache, PostgreSQL, PHP, the RabbitMQ messaging system, and a whole lot of other good FOSS. It has a one-click installation on Debian and Ubuntu, and can be installed on pretty much any Linux distro. You can also try out a live demo, which is an excellent way to get acquainted with Airtime. You can upload your own audio, create playlists, and schedule and listen to your own programming. Its browser-based interface lets you control it from any computer anywhere (figure 1).

Figure 1: A new Airtime installation waiting for audio files and scheduling.
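If you want to try that one-click installation on a Debian or Ubuntu box instead of the demo, a minimal sketch looks something like this (assuming Sourcefabric's apt repository is already configured and the package is named airtime – check the current install docs for the exact repository line):

$ sudo apt-get update
$ sudo apt-get install airtime

The package should pull in Apache, PostgreSQL, RabbitMQ, and the rest of the stack as dependencies.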

Airtime supports streams from your own server, and also Icecast and Shoutcast, and has a fallback function that switches to another stream if yours fails for any reason. You can play MP3 or Ogg files, customize the interface, and add the Mixxx live DJ software for additional functionality like managing live broadcasts. Airtime is the most mature Sourcefabric application, and the average Linux nerd won't have any trouble getting it up and running.

The hardest part is learning audio production if you don't already know how to do that: how to record, edit, and optimize audio streams for broadcast. There are other great FOSS applications for this like Audacity and Ardour.

Airtime has nice user management; you can allocate time slots to different people and then they manage their own slots. It has some jQuery widgets that automatically update a Web calendar for your users, and it creates searchable archives.

Airtime is a free download, and the Pro version adds hosting and support, ranging from $50 to $500. You can also purchase custom services if the prefab deals don't meet your needs.

Newscoop is an open source content management system specialized for journalists and newspapers. Like Airtime, it runs on a LAMP stack and requires a bit of work to install and configure. Good, detailed installation instructions are available.
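As a rough sketch of the LAMP prerequisites on a Debian-family system (package names are from this era and may differ on your distribution; Newscoop itself is then unpacked into the web root and finished through its web installer):

$ sudo apt-get install apache2 mysql-server php5 php5-mysql php5-gd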

Newscoop manages the online text article publication process, and has a modular dashboard (figure 2) that shows scheduling, article status, and other information on one page. It follows the traditional print publishing layout model, with different sections (News, Arts, Politics, and such), and modern features such as commenting, RSS feeds, multimedia, and subscription services.

Figure 2: The Newscoop Dashboard displays all the publication status information you want in one view.

It has a caching module to speed up page delivery, and it automatically tailors the presentation for different browsers on different devices. It includes SEO (search engine optimization) tools, multi-language support, and geolocation support for customizing content delivery by location. Newscoop has support for OpenStreetMap, Google Maps, and Mapquest Open. It is theme-able, and has a live demo to play with.

Airtime and Newscoop manage online broadcasting and publication. Superdesk manages the story creation workflow. A story might pass through several hands before it's ready for publication: assignment editor, reporter, a second contributor, photographer, copy editor, editor.

Superdesk has a customizable workflow for tracking and editing content. It outputs content in formats that are compatible with external design software such as Adobe InDesign and InCopy. It uses calendars and notifications so you can store information for future stories, and then be reminded when the time comes. Superdesk pulls together news from RSS feeds, social network sites, and wherever else you want. And it has business tools such as sales trackers, a classified ad builder, and order tracking. Superdesk is scheduled to be released sometime in 2012, and you can see the source code at the Sourcefabric Wiki (free registration required).

The essence of journalism has not changed: gathering and reporting news. Sourcefabric provides modern tools for independent journalism, and free/open source software makes projects like Sourcefabric possible.


10 Most Popular Linux.com Stories in 2011

Here it is, 2011 is almost at an end. As we get ready for 2012, we wanted to take a look at the stories that were most popular with Linux.com readers in 2011. We've pulled together a list of the 10 most popular stories from the year for you to enjoy over the holidays.

To get this year's top 10, we looked at our site statistics and pulled the most popular original pieces published in 2011. The winners were not too surprising, but do help us figure out what kind of stories to line up for next year. Without further ado, we present the 10 most popular pieces published in 2011.

Earlier this month, we took a look at the 10 most important open source projects of 2011. It looks like Linux.com readers were quite interested in the results.

Another popular post from 2011 was Brian Proffitt's look at the 7 best Linux distributions for desktops, laptops, enterprise, and more.

We also looked at the best netbook Linux distributions from 2011. Though netbooks weren't quite the hot property they were in previous years, lots of Linux folks seem to love their netbooks.

Choice is one of the things that makes the Linux desktop great. Our round-up of alternative window managers for Linux was a huge favorite when we published it early in 2011. Long live FVWM!

Nathan Willis took a good, long look at GNOME 3.0 in March. Though Willis wasn't entirely pleased with what he found, Linux.com readers seemed to be deeply interested in the new GNOME.

SSH is one of the most-used tools on Linux, so it's little surprise that a piece on OpenSSH tips and tricks cracked the top 10 for 2011. I still use the FUSE trick for mounting remote filesystems on a regular basis.
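If you have never tried the FUSE trick, the general idea looks like this (sshfs is packaged by most distributions; the host name and paths are only placeholders):

$ sshfs user@example.com:/remote/dir ~/mnt/remote
$ fusermount -u ~/mnt/remote

The first command mounts the remote directory over SSH; the second unmounts it when you are done.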

The Fedora is strong with the Linux.com community. Carla Schroder's Fedora 16 preview drew quite a bit of attention when we published it in early November. That might have something to do with the slew of new technologies and refinements (like GNOME 3.2) that were shipped with F16.

Another winner from Carla, an IPv6 "crash course" published in April. Carla took a look at the advantages of IPv6 and some tips and tricks for working with IPv6 on Linux.
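As a taste of the sort of thing the crash course covers, two quick commands will tell you whether your machine already has IPv6 addresses and connectivity (the target host is just an example):

$ ip -6 addr show
$ ping6 -c 3 ipv6.google.com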

Calendar clients are easy to find on Linux. Calendar servers? That's a bit trickier, but Nathan Willis gets it all sorted out. Can you believe a FOSS project from Apple makes the list?

The first new version of LibreOffice shipped in early 2011, and Brian was there to put it through its paces. LibreOffice has come a long way since then, of course.

There you have it, the most popular stories of 2011. What about 2012? Tell us what kind of stories you'd like to see in 2012, and beyond.

By the way, if you're curious about last year's top 10, check out the top 10 from 2010 too.


10 Best Linux.com Tutorials of 2011

One of my resolutions for the New Year is usually “learn more” about things I’m interested in. Of course, that includes Linux. If you’re looking to learn more about Linux this year or next, you might want to check out some of the tutorials that we published throughout 2011.

Every year, we try to line up topics that will be useful to the Linux.com audience. That includes new users who’ve never touched a command line before, as well as expert users who have a lot of experience under their belts. Now, in no particular order, I present some of the best tutorials from 2011. Enjoy!

Nathan Willis provides a little advocacy for his favorite editor, GNU Emacs. His GNU Emacs 101 explains why users would want to try Emacs, and gives you the basics to get started.

Carla Schroder explains how to take advantage of "one of the best features of virtualization" with Linux KVM: how to do live and offline migrations.
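As a very rough sketch of what a live migration looks like with the libvirt tools (the guest and destination host names are placeholders, and this is not necessarily the exact method Carla walks through):

$ virsh migrate --live myguest qemu+ssh://desthost/system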

As Carla writes, “we had a System V (SysV) type init daemon to manage Linux system startup, and it was good. It was configured with simple text files easily understood by mortals, and it was a friendly constant amid the roiling seas of change. Then came systemd, and once again we Linux users were cast adrift in uncharted waters. Why all this change? Can’t Linux hold still for just a minute?”

No, no it can’t. Sorry. But at least we have a great intro to systemd from Carla to help deal with the new init hotness.

You might use GNU grep all the time, but glark is an alternative that may be better for you in some situations. If you’d like to learn more about glark, we’ve got a short tutorial on glark to get you started.

Networking problems can just ruin your day. Whose fault is it? Carla provides a tutorial on figuring out where network problems lie.

In a follow-up to the troubleshooting tutorial, Carla digs into figuring out who is on your network using Linux tools. This includes popular utilities like Nmap, fping, iperf, and more.
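Two of the quick checks you can do with tools like these (the address range is a placeholder for your own network):

$ sudo nmap -sP 192.168.1.0/24
$ fping -a -g 192.168.1.0/24

The first does a ping scan of the whole subnet with Nmap; the second asks fping to list every address in the range that answers.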

Don’t want a “fancy GUI” to manage your task list? You’re not alone. But what if you want to sync your CLI app with your Android device? In that case, take a look at Nathan’s tutorial on using todo.txt in conjunction with Android.

These days, there are a lot of reasons for users to want to be able to browse the Internet anonymously. Nathan provides a really good tutorial on browsing anonymously with Tor on Linux.

As a follow-up to the first Tor piece, Nathan provided a second tutorial on running invisible services and bridges for those who need more privacy than Tor provides by default.

They say that those who don’t know history are doomed to repeat it — but if you don’t know how to use your history in Bash, you’re just not making the most of your system. If you want to learn how to use your history, we have five tips for working with GNU Bash history on Linux.
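A few of the basics, in case you want to try them right away:

$ history | grep ssh
$ !105
$ sudo !!

The first searches your saved history, the second re-runs command number 105 from that list (the number is just an example), and the third repeats the previous command with sudo in front of it. Ctrl-R gives you an interactive reverse search.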

Have some topics you’d like to see covered in 2012? Let us know in the comments!


What's New in Linux 3.2?

What better way to kick off the new year than with a brand new kernel, fresh out of Kernel.org? Linus Torvalds released the 3.2 kernel on January 4th, with improvements in the Ext4 and Btrfs filesystems, thin provisioning features, a new architecture, and CPU bandwidth control.

The last major kernel release was in late October of last year. That release, if you recall, included support for OpenRISC, Near-Field Communications, and the cpupowerutils project.

Last week, Torvalds pushed 3.2 out into the world with a pretty short announcement and opened the merge window for Linux 3.3. Naturally, this release has the usual fixes and new drivers, but also some notable new features that are worth taking a look at.

Let’s face it, users have an insatiable desire for more and more storage, which means larger and larger hard drives. Unfortunately, the maximum filesystem block size for Ext4 has been stuck at 4KB, which is a bit of an inconvenience for users who are working primarily with larger files. With the 3.2 release, users can increase the effective block size to a maximum of 1MB.
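Under the hood this is the new "bigalloc" feature, which allocates space in larger clusters rather than changing the on-disk block size itself. A sketch of formatting a filesystem that way (the device name is a placeholder, and you need a recent e2fsprogs with bigalloc support):

# mkfs.ext4 -O bigalloc -C 1048576 /dev/sdb1

The -C option sets the cluster size in bytes – 1MB here.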

Btrfs has seen a fair number of small improvements with 3.2 as well. For example, Btrfs now gives more detailed messages when it encounters bad blocks or other errors. In addition, you can now do a manual inspection of the filesystem, so you can query Btrfs about what files belong to bad blocks.

Linux adding a new architecture is not unusual, but the Hexagon Processor from Qualcomm is a bit different. Most of the CPUs supported by Linux are like the x86/AMD64 architecture that most of us use in our desktop/laptop machines or servers. (“Like,” in that they are for a wide range of general purpose machines.)

The Hexagon is a “general-purpose digital signal processor designed for high performance and low power.” It can be used for things like processing video, or could be used for the OS and digital signal processing. You probably won’t be running Linux Mint on this one anytime soon, but it might be in your next set-top box or something else that requires a lot of processing power for media but not general purpose computing.

On the TCP side, Google really wants to get search results to you a bit faster. So much so, in fact, that they have developed a better packet recovery algorithm for TCP. Google’s algorithm, “Proportional Rate Reduction,” is meant to improve latency.

As time goes on, the Linux kernel just gets more and more flexible when it comes to fine-tuning resource control. The 3.2 kernel has two notable features that will be very useful in this regard.

The first is CPU bandwidth control, which allows admins to specify how much CPU time a process group can use in a period of time. For example, users could use the scheduling features to limit a group to N CPUs' worth of runtime in a specific period (measured in milliseconds), or limit groups to a fraction of a CPU. When the group hits its limit, it's throttled until the next time period starts.

For instance, you could give a time period of 1000ms and a quota of 1000ms. That would give a group 1 CPU worth of runtime. Or you could give the group a quota of 100ms with a time period of 1000ms, which would limit the group to 1/10th of a CPU runtime.
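A minimal sketch of that last example using the cgroup filesystem directly (this assumes the cpu controller is mounted at /sys/fs/cgroup/cpu and that the kernel was built with CFS bandwidth control; the values are in microseconds and the PID is a placeholder):

# mkdir /sys/fs/cgroup/cpu/limited
# echo 1000000 > /sys/fs/cgroup/cpu/limited/cpu.cfs_period_us
# echo 100000 > /sys/fs/cgroup/cpu/limited/cpu.cfs_quota_us
# echo 1234 > /sys/fs/cgroup/cpu/limited/tasks

The first two writes set a 1000ms period and a 100ms quota – one tenth of a CPU – and the last line moves process 1234 into the group.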

The second feature allows over-provisioning of storage so that space isn’t wasted. Wait, what? How is over-provisioning a good thing?

Imagine you have a system with 500 users given a storage quota of some arbitrary amount, like 15GB. You give each user the maximum amount of storage you think is reasonable or necessary, but the odds are in many cases that users will only use a fraction of the storage. For instance, in a lot of Web hosting scenarios you might provision users with 10GB of storage but in reality many users are only going to use a few hundred MB, not 10GB.

The thin provisioning features added to the Linux Device Mapper allow admins to over-provision, so you don’t have to have enough storage to cover the maximum storage scenario. This allows companies to avoid spending a lot of money on storage that they don’t need – an especially nifty feature now, considering the rising prices of hard drives.
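For the curious, the feature is exposed through the device mapper's new thin-pool and thin targets. A very rough sketch, with device names and sizes as placeholders (the kernel's thin-provisioning documentation has the real details):

# dmsetup create pool --table "0 20971520 thin-pool /dev/sdb2 /dev/sdb1 128 32768"
# dmsetup message /dev/mapper/pool 0 "create_thin 0"
# dmsetup create thin0 --table "0 2097152 thin /dev/mapper/pool 0"

The first command builds a pool from a metadata device and a data device, the second creates thin volume number 0 inside it, and the third exposes that volume as /dev/mapper/thin0 with a 1GB virtual size – regardless of how much physical space is actually free in the pool.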

Kernel development never sleeps, so while 3.2 is making its way out into the world kernel folks are busily hacking on 3.3 and beyond.

One of the more interesting features that we might see in 3.3 (remember, no promises) is a lot of Android integration. Greg Kroah-Hartman wrote in mid-December that “the next linux-next Linux kernel release should almost boot an Android userspace, we are only missing one piece, ashmem, and that should hopefully land in my staging-next tree next week. The patches are still being tested and cleaned up by others… hopefully, by the 3.3 kernel release, the majority of the Android code will be merged.”

That won’t mean a complete Android kernel in mainline Linux. Kroah-Hartman says that there’s still more work to do, but it’s significant progress nonetheless.

By the way, if you’re into “ancient” kernels, Kroah-Hartman also released the 2.6.32.53 kernel on Friday, January 6th.


Tone Your Photos in a Hurry with Delaboratory

Can you do high-quality photo correction with an application that is simple enough for normal (meaning non-photo-gearhead) people to use? Jacek Poplawski thinks so, and that's why he created Delaboratory, a point-and-click photo adjustment app that does not require a lifetime's worth of darkroom experience to understand.

Delaboratory fills an unusual niche, because it is easy to use yet excels at a task most casual users don't pay much attention to. Intrigued? You can test drive Delaboratory for yourself; visit the project's Google Code site and you will find binary packages for Debian, Ubuntu, openSUSE, Arch Linux, and Windows, plus source code bundles you can compile for other distributions. The current release is version 0.5, which landed in October.

There are certainly other "lightweight" photo editors on the Linux landscape — most of the simple image-managers like F-Spot offer some basic adjustment controls these days. But Delaboratory gets its edge by performing its correction operations in ultra-high, 32-bit-per-channel mode, and letting you use Lab color.

What that means is that all of the pixel-blending math happens in high-bit-depth data structures where even multiple correction steps will not cause round-off errors, and in a color-space that treats brightness and color separately. In contrast, using the standard 8-bit-per-channel RGB editing space usually results in clipping, unsightly bands of color, and muddy-looking tones — more so the more corrections you make.

Yet Delaboratory also differs from your typical photographer's utility because it employs a "keep it simple" approach. That does not mean there is a shortage of tools (although there are fewer options than you will find in a high-end adjustment tool like RawTherapee or Rawstudio). Instead, Delaboratory gets its simplicity by focusing on what Poplawski considers the crucial step in producing good photos — color correction — and by employing subtle UI conventions, like offering simplified controls, and treating all of the operations you use like Photoshop-style "adjustment layers."

Fire up Delaboratory for the first time, and you will see a stripped-down looking editing window. There is no directory browser or thumbnail "filmstrip;" you open one image at a time. A basic histogram sits on the top right, below it the stack of layers, and at the bottom an array of text-labeled buttons. Currently the tool set includes a curves editor, a channel mixer (which lets you adjust the proportion of each color channel in the image), a blur tool, an unsharp mask tool, and a suite of color-space converters. Click on any tool button, and it pops into place on the layer stack. A checkbox next to it controls whether it is active or inactive, and two buttons allow you to tweak the layer's effect — one for the tool's settings, and one for the blending mode.

When you click on one of the tool settings buttons, a dialog pops up. You can make changes to the sliders or curves and see the results instantly, but one of the confusing aspects of this setup is that the application allows you to keep several of these dialogs open at once. They are unlabeled, so if you have two "curves" applied — say, one for luminosity, and one for color — it is easy to lose track of which is which.

Assuming you don't get confused, you can add conversions, adjustment curves, blurs, or other operations, and they stack up on top of each other. Whenever you want to, you can click on a lower level in the stack, and see each layer's effects one at a time. There is no need for undo/redo or a "history" list because all of the changes are reversible. Delaboratory works this bit of magic by only applying the effect to a preview; the final operations are not done until file export time. If you are used to image operations permanently and immediately altering the file, this takes some getting used to, but it is a change for the better.

Figure: Delaboratory screenshot.

The controls themselves are well-implemented, which is not something you notice every day. The difference is in how the sliders and histograms are labeled with what they control. A lot of image adjustment applications show curve graphs with the words "red," "green," or "blue" sitting somewhere in the title bar, but Delaboratory shows you the range of colors directly on the graph. Same with the mixer, which is not usually intuitive to work with. Plus, each tool dialog comes with pre-set buttons that let you quickly experiment: adjusting curves and mixtures by simple fractional ratios, inverting colors, and so forth. You may not always want to use a preset, but having the option is welcome.

These small touches may not sound like much when you are thinking in RGB terms, but they are available in all of Delaboratory's color modes: Lab, LCH, XYZ, HSL, and so on. Understandably, a lot of people have trouble visualizing Lab color, and some other options (such as XYZ) do not even correspond directly to the visual spectrum. Making those color spaces easy to work with is a noteworthy feat.

The down side to Delaboratory's stack-based "adjustment layers" is that you have to adjust your thinking to be stack-based as well. If you ever learned stack-based programming or Reverse Polish notation, then first of all, you have my sympathy, but second of all, you know how difficult it can be to adapt. In my estimation, Delaboratory is a bit too strict with this stack metaphor — you can only remove the top-most layer from the stack, and you cannot rearrange the layers. Many of Delaboratory's simplified sliders and tools are easier to use than their counterparts in traditional apps, but the benefits do not extend to every aspect of the UI.

Poplawski describes his own workflow on the wiki: he shoots images in raw mode, and converts them to TIFF format using RawTherapee, usually doing simple exposure adjustment and white-balancing at the same time. Then he makes his color corrections in Delaboratory, and exports the results. If there is a retouch needed (such as blotting out a speck of dust), he can do that in GIMP, because after the color corrections are made, the high-bit-depth work is pretty much finished.

As to why color correction is so important — important enough to warrant writing an application that does nothing else — Poplawski's argument is that natural-looking skin tone is what separates a good photo from a lousy one. For the most part, that is true — the first things we look at in a photo are the people. Getting skin tones right can be tricky, especially when artificial lights are involved. Similarly, a lot of people like converting their images to gray scale or sepia-toning them.

Delaboratory can definitely make both of those tasks easy, while preserving image quality better than most of the 8-bit alternatives. However, at the moment Delaboratory should still be considered a work-in-progress. Although it does what it does skilfully, and includes a lot of nice UI touches, it also requires you to keep a raw converter and a general-purpose image editor (e.g., GIMP) at your fingertips. Hopefully when it adds more basic image editing functionality, more users will give it a try. Chances are they will like what they see; the challenge for the development team will be to integrate that experience into a less complicated editing workflow.


The Best of Linux.com Weekend Project from 2011

Weekends are for relaxing, spending time with friends… and tackling those tech projects that you never have time to get to during the week. The weekend project is one of the most popular features here on Linux.com, and we had a bumper crop of excellent projects in 2011. Here are 10 of the best from 2011, covering everything from better ways to upgrade your system to getting a leg up on Web projects.

Reproducing your current environment when you upgrade isn’t always easy. Nathan Willis undertakes a from-scratch reinstall, and provides valuable lessons that you can use when you tackle your next migration.

Debian developer Joey Hess started writing etckeeper after unsatisfying experiments with other people’s attempts to shoehorn /etc into a Git repository. A few people had done so successfully, but ran into two major problems: what to do when a package installation makes changes to the directory or a file (in which case the user can’t enter the usual log entry), and what to do about metadata changes like file permissions. This weekend project shows how you can control your configuration with etckeeper.

Ever had a Web project in mind, but got stalled at the prospect of having to worry about the site design? If code, and not design, is your strong point you’ll want to take a look at Twitter’s Bootstrap.

For the security conscious, there is always room for another weapon against attackers. Firewalls, intrusion detection systems, packet sniffers – all are important pieces of the puzzle. So too is Honeyd, the “honeypot daemon.” Honeyd simulates the existence of an array of server and client machines on your network, including typical traffic between them. In this weekend project, you’ll learn how to use Honeyd on Linux to fool attackers.

If you’ve ever needed to edit one or more files to make quick changes, you’ve no doubt found that doing it using a text editor can be a slow slogging process. Linux, thankfully, has a number of tools that make it easy to do this non-interactively. One of the best is sed, a “stream editor” that can help you make quick work of filtering and transforming text. Use this weekend project from 2011 to get to know GNU sed.
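If you have never used it, the flavor is something like this (the file name and typo are, of course, just examples):

$ sed -i 's/teh/the/g' report.txt

That single command fixes every occurrence of the typo, in place, with no editor session required.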

One of the keys to using GNU sed successfully is knowing how to use its regular expressions. If you look over sed scripts without knowing regular expressions, the effect can be pretty disconcerting. Don’t worry — it’s not as confusing as it looks. If you’ve read the “get to know GNU sed” piece, step up your sed game with an intro to using sed regular expressions.
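For instance, a regular expression lets one command operate only on the lines you care about (the pattern and file name are only illustrative):

$ sed -n '/^ERROR: [0-9]\{3\}/p' app.log

That prints just the lines that begin with "ERROR:" followed by a three-digit code.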

Just about every program has spell checking, but what if you’re grammatically challenged? Most apps don’t have a grammar checker built-in, even though they should. Want to get a leg up on your writing? Learn how to get grammar checking for your open source office suite. Note that After the Deadline not only works with LibreOffice, it also works with tons of other apps – including my favorite, Vim.

The Gentoo-based SystemRescue CD/USB is one of the very best rescue distros, packing amazing functionality into a 350MB image. It can rescue Linux, Unix, Mac, and Windows systems, and recover data from almost any media. With this weekend project, you’ll learn how to create a SystemRescue live USB stick, and recover data from failing drives.

Ubuntu switched to Thunderbird a while back, and Linux Mint has defaulted to the Moz mailer for quite some time. If you want to make the most of Thunderbird, though, you should check out this weekend project from July on adding conversations and calendaring to Thunderbird.

Have a book you want to publish? Ready to self-publish that Great American Novel that you’ve been tinkering with since high school? These days, the barriers to publishing a book are lower than ever. All you need is a manuscript and an open source application like Sigil to tame the text and massage it into EPUB format. If writing and publishing is on your resolution list for 2012, learn how to write and publish eBooks on Linux with Sigil.

Have some ideas what you’d like to see on the weekend project for 2012? Give us a shout in the comments. Happy New Year!


How to Craft a Killer Cover Letter for Linux Jobs

You're certified, bona fide, and active in your favorite open source project, but how do you craft the clever cover letter that lands your next Linux job?

If you're lucky, your reputation precedes you and dream job offers land in your in-box every other day. If you're like the rest of us, a well-crafted cover letter can make the difference between getting a phone call, or getting shuffled to the bottom of the resume pile. Before prospective employers look down the list of Linux skills and open source project experience you've laid out on your resume, they'll do a quick read of your cover letter, which is your one chance to make a fabulous first impression.

I've long hated writing cover letters. Once you've written your resume, it's pretty much good to go, with occasional updates or tweaks to highlight skills that would appeal to specific employers. Cover letters, on the other hand, need to be written with a specific employer or position in mind. And knowing that these few paragraphs need to say more than your resume does can be nerve-wracking. With a few rules in mind, you can take the torture out of writing and focus on why you are the only logical solution to an employer's high-tech needs.

First, the good news: No one wants to read your dissertation. Instead, keep your cover letter brief, yet meaty (like this rule).

Before writing anything, research the company and position for which you are applying. Read the press releases on the official company site, search the web to learn more about what others are saying about the company, and consider how your skill set will be an asset to the employer. Remember that the employer wants to know how you will meet his or her needs, so don't focus on why this job would meet your needs (unless your needs are all about focusing on the needs of your prospective employer).

In her Dice.com article called How to Target A Cover to Get the Manager’s Attention, Leslie Stevens-Huffman explains, "Don't belabor points or regurgitate the information in your resume. Create a short, compelling narrative that proves you understand the company’s needs and describes how you intend to meet them."

Gayle Laakmann McDowell, author of The Google Resume and Cracking the Coding Interview, says that how you discuss your open source work depends on the type of job for which you are applying. "If you're applying for a coding role, you should focus on the code you wrote (that is, the features)," she says. "If you're applying for a Program Management job, you should focus on the leadership aspects."

Instead of going into detail in the cover letter, you could put this open source experience under a Projects section of your resume or under Work Experience, particularly if you've spent a substantial amount of time on an open source project. "Additionally, providing a link to your GitHub page or another page where a resume screener can learn more about your coding experience is always useful," she adds.

"As a hiring manager for software engineers, I'm always happy to see that a candidate is a contributor to open source projects," says Jenson Crawford, Director of Engineering at Fetch Technologies. "Participation in open source projects tells me that technology is a passion for the candidate and it's not just a job. It also shows that the candidate is interested in making things better than they were found," he says.

Crawford agrees with McDowell when it comes to including open source experience on a cover letter. "If a candidate's open source contributions can demonstrate that the candidate has the needed qualifications or experience, include the contributions to make the connection to the hiring company's needs," he explains. "If there isn't a direct connection between the candidate's open source contributions and the qualifications listed in the job posting, or if the candidate is applying to a company without a specific job posting, include information about the contributions in a general way. Perhaps something like: I'm passionate about technology and contribute to the open source projects X, Y, and Z."

Let's say you are applying to be a web developer and you're interviewing for a company that requires Drupal experience. Here's your opportunity to show that you've researched the company and have the exact skills they seek. The job description says:

Experience with and a high degree of competency in Drupal is a must, including development of modules and themes, and familiarity with the Drupal API, hook system, form API, etc.

So your response to this job requirement might be something like, "In addition to developing more than two dozen Drupal modules and themes for my current employer, I'm also active in the Drupal community and recently gave a DrupalCon Lightning talk about submitting patches." Now the prospective employer knows that your resume includes required skills, but you've also added something more personal about your specific skills, which brings us to the next rule.

Showing your personality in a resume is no easy feat, so be sure to do it in your cover letter. "Not only do you want to show that you're a good fit for the position, but you also want the reader to like you," explains Kim Isaacs, Monster Resume Expert. "Appropriate use of humor, combined with a friendly and professional tone, can help endear you to the hiring manager." Still, keep in mind that the cover letter is a formal document and not an email to your BFF, so keep the tone professional, too.

If you're sloppy on your cover letter, employers can safely assume you'll be sloppy in your code or work habits. In addition to spell checking your letter, consider having a friend or colleague proofread it, too. Reread the job description and your letter — have you shown that you understand the position for which you are applying and that you're the ideal candidate for the role?

You have as long as you need to proof the cover letter before sending it in, so be sure that it doesn't make a trip to the circular file because it's sloppy.

If you're not quite ready to apply for that great Linux job, consider ways to get involved in the community at events or with Linux training opportunities.

What other advice do you have for crafting clever cover letters? Share your success stories (or horror stories) in the comments below.


Weekend Project: Get to Know Btrfs

The Butter/Better/B-tree Filesystem, Btrfs, is supposedly destined to become the default Linux filesystem. What makes it special, and what's wrong with good old tried-and-true Ext2/3/4?

Linux supports a gigantic number of filesystems: removable media, network, cluster, cloud, journaling, virtual machine, compressed, embedded, hardware inter-connect, pseudo-filesystems that live only in memory, Mac and Windows filesystems, and many more.

You are doubtless familiar with the general-purpose Ext2/3/4, JFS, XFS, and Reiser filesystems that we use on our desktop PCs and servers. With all of these filesystems cluttering up the landscape, what is the point of yet another one? (There is even YAFFS: Yet Another Flash File System.)

The point is meeting new needs and workloads, and building functionality into the filesystem rather than relying on a herd of external utilities. Btrfs is rather like a blend of features from ReiserFS and ZFS, Sun's advanced copy-on-write/volume manager/RAID/snapshot/etc. filesystem.

Many Linux users yearn for a native port of ZFS, but its GPL-incompatible license (the Sun CDDL) ensures that Sun's implementation (now Oracle's) can't be included in the Linux kernel.

Even so, you can't keep a good hacker down, and so there are two ports for Linux. One is ZFS on FUSE, which runs ZFS in user-space. It's included in a lot of distros so it's an easy installation. The other one is ZFS on Linux. This is a build of ZFS as a kernel module for users to install, and so you get kernel support without a GPL violation because it is not distributed with the kernel.

It's great having those options to try out ZFS, and I applaud the maintainers of these ZFS projects. Still, it looks like Btrfs is going to take the place that ZFS could have owned were it not for its incompatible license. Oracle is the primary sponsor of Btrfs, and plans to make it the default filesystem in Oracle Unbreakable Linux sometime in 2012. Btrfs isn't just an Oracle project, but has a lot of community support from the Linux kernel team and many Linux distributions. Odds are it's included in your favorite distro. (Run cat /proc/filesystems to see what filesystems your Linux supports.)

So what does this amazing super-duper filesystem do? How about a handy bullet-pointed list to answer this question?

- RAID 0, 1, and 10
- Copy on write (COW)
- Incremental backup
- Online defragmentation
- gzip and LZO compression
- Space-efficient packing of small files
- Dynamic inode allocation
- Checksums on data and metadata
- Shrink and grow storage volumes
- Extents
- Snapshots
- 16 EiB maximum file size

Planned features include RAID 5 and 6, deduplication, and a ready-for-primetime filesystem checker, btrfsck. You can try out btrfsck now because it is included in btrfsprogs. (Which of course Debian/Ubuntu/Mint etc. changes to btrfs-tools, and Fedora calls it btrfs-progs.) But it is not ready for production systems yet.
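If you want to poke at it on a scratch partition anyway, the invocation is simply this (unmount the filesystem first, and use a test device, not one holding data you care about):

# btrfsck /dev/sda8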

Putting the finishing touches on btrfsck is the last big step before Oracle makes it the default filesystem in their next Unbreakable Linux release. Fedora 16 Linux was supposed to default to Btrfs, but now they're aiming for Fedora 17 in May 2012.

I'm a big fan of RAID 10, which is RAID 1+0, mirroring plus striping. It is expensive in disks, because only 50% of your total disk capacity goes to usable storage. But it is simple, robust, and fast. Half your disks can fail without losing your data. I got burned out on RAID 5 and RAID 6 years ago; perhaps I had bad RAID mojo, but I experienced a lot of failures, and they are slow. It seemed the systems under my care were more adept at propagating parity errors than operating correctly. So for me, RAID 5 and 6 can sit on the back burner indefinitely as long as I have RAID 10.

16 EiB is exbibytes, a measurement close to the more commonly used exabyte. An exbibyte is 1,024 pebibytes. In comparison, Ext4 maxes out at a maximum volume size of one exbibyte and file sizes up to 16 tebibytes. However you say it, it is a lot.

Btrfs doesn't contain any database-specific optimizations, and is not a clustering filesystem. It is designed to handle very large storage volumes, protect data, simplify large storage management, and read and write fast.

A COW – copy on write – filesystem is extra-careful with writing your data. When you make a change to a file, the old data are not overwritten. Instead, the filesystem allocates new blocks for the new data, and only the changed data are given a new allocation. The downside is this creates fragmentation. So Btrfs supports online defragmentation with the

btrfs filesystem defragment

command.

COW filesystems lend themselves to easy, efficient snapshots, and Btrfs supports both snapshots and rollbacks. The easy safe way to try Btrfs is to create a new partition for testing. Gparted supports Btrfs, as you can see in figure 1.

Figure 1: Gparted formatting a 50GB partition as btrfs
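If you would rather skip Gparted and format the partition from the command line, the equivalent is a one-liner (using the same test partition as the rest of this example):

# mkfs.btrfs /dev/sda8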

Next, mount this partition. In this example the mountpoint is /btrfs-volume:

# mount -t btrfs /dev/sda8 /btrfs-volume

Now we can create a subvolume in this partition. Subvolumes are cool. They are like independent filesystems inside the parent filesystem, with their own mountpoints and options. Create one this way:

# btrfs subvolume create /btrfs-volume/test

And that's all there is to it. You'll see this as an ordinary directory in your file manager (figure 2). You don't need to worry about allocating space like you do with normal disk partitions, because subvolumes automatically snag whatever space they need from the parent volume as you add data to them. So you can go ahead and copy some files into the test subvolume. You'll need root permissions, or you can futz with the file permissions in the usual way and change them to an unprivileged user.

Now let's create a snapshot:

# btrfs subvolume snapshot /btrfs-volume/test /btrfs-volume/test-snapshot-1
Create a snapshot of '/btrfs-volume/test' in '/btrfs-volume/test-snapshot-1'

Snapshots are very efficient because multiple snapshots share the same original files and copy only the changes. You can list all the snapshots in the same volume; you need to name one of them and then all of them are displayed:

# btrfs subvolume list /btrfs-volume/test
ID 256 top level 5 path test
ID 257 top level 5 path test-snapshot-1

This also shows that Btrfs sees snapshots and subvolumes as the same things. Your snapshots can be copied elsewhere as backups, or mounted independently to different mountpoints. Want to roll back to an earlier snapshot? First set the snapshot as the default. You need the snapshot ID, and then the path:

# btrfs subvolume set-default 257 /btrfs-volume/

Then unmount the subvolume, and then remount:

# umount /btrfs-volume
# mount -t btrfs /dev/sda8 /btrfs-volume

Is that not cool? After creating subvolumes you don't need to mount the parent volume.

Btrfs is still rough around the edges, and the documentation and administration tools are incomplete. If you've used ZFS then Btrfs feels like a clunky copy, because administering ZFS is faster and easier. ZFS has a several-year head start on Btrfs, though. I expect Btrfs will improve rapidly as it becomes more widely used.

Learn more about Btrfs with The Linux Foundation's free Linux training tutorial, Introduction to Btrfs. A full course schedule of Linux SysAdmin training is also available.


Organizing Open Source Efforts at NASA

"When I think of open source, Linux is the core," says William Eshagh, a technologist working on Open Government and the Nebula Cloud Computing Platform out of the NASA Ames Research Center. Eshagh recently announced the launch of code.nasa.gov, a new NASA website intended to help the organization unify and expand its open source activities. Recently I spoke with Eshagh and his colleague, Sean Herron, a technology strategist at NASA, about the new site and the roles Linux and open source play at the organization.

Eshagh says that the idea behind the NASA code site is to highlight the Linux and open source projects at NASA. "We believe that the future is open," he says. Although NASA uses a broad array of technology, Linux is the default system and has found its way into both space and operational systems. In fact, the websites are built on Linux, the launch countdown clock runs on Fedora servers, and Nebula, the open-source cloud computing project, is Ubuntu-based. Further, NASA worked with Rackspace Hosting to launch the OpenStack project, the open source cloud computing platform for public and private clouds.

Why is NASA contributing to open source? Eshagh says that NASA's open systems help inspire the public and provide an opportunity for citizens to work with the organization and help move its missions forward. But the code site isn't only about sharing with the public and making NASA more open. The site itself is intended to help NASA figure out how the organization is participating in open source projects.

In the initial phase, the code site organizers are focusing on providing a central location to organize the open source activities at NASA and lower the barriers to building open technology with the help of the public. Herron says that the biggest barrier is that people simply don't know what's going on in NASA because there is no central list of open source projects or contributions. Within NASA, employees don't even have a way to figure out what their colleagues are working on, or who to talk to within the organization about details such as open source licenses.

At NASA, the open source process starts with the Software Release Authority, which approves the release of software. Eshagh says that even finding the names of the people in the Software Release Authority was an exercise, so moving the list of names out front and shining a light on it makes it easier to find the person responsible. The new guide on the code site explains the software release guidelines and provides a list of contacts and details about releasing the software, such as formal software engineering requirements.

Phase two of the code project is community focused and has already started. Eshagh says there's a lot of interest in open source at NASA, including internal interest, but the open.NASA team is still trying to figure out the best way to connect people with projects within the agency.

Eshagh says that the third phase, which focuses on version control, issue tracking, documentation, planning, and management, is more complicated. He points to the Goddard General Mission Analysis Tool (GMAT), an open source, platform-independent trajectory optimization and design system, as an example. There is no coherent or coordinated approach to develop software and accept contributions from the public and industry. "What services do they need to be successful? What guidance do they need from NASA?" Eshagh wonders. "We're trying to find best-of-breed software solutions online; GitHub comes to mind," he says.

Phase three also will include the rollout of documentation systems and a wiki, which Eshagh and his team want to offer as a service, but in a focused, organized way to help projects move forward. He says that NASA doesn't promote any particular project or product – they just want the best tool for the job. "We're taking an iterative approach and making information available as we get it and publish it," Herron says. They've already received a bunch of feedback about licenses, for example.

How will the open.NASA team measure the success of the code site and their other efforts? "We're trying to build a community," Eshagh explains. "We've kind of tapped into an unsatisfied need for public and private individuals to come together."

He says they'll measure success by how many projects that they didn't previously know about come forward and highlight what they are doing. "People are actually reaching out and I think that's a measure of success," he says. Also, the quantity and quality of the projects and toolchains, as well as how many people use them, will be considerations.

In December, Eshagh announced NASA's presence on GitHub, and their first public repository houses NASA's World Wind Java project, an open source 3D interactive world viewer. Additional projects are being added, including OpenMDAO, an open-source Multidisciplinary Design Analysis and Optimization (MDAO) framework; NASA Ames StereoPipeline, a suite of automated geodesy and stereogrammetry tools; and NASA Vision Workbench, a general-purpose image processing and computer vision library.

In March 2011, NASA hosted its first Open Source Summit at Ames Research Center in Mountain View, California. GitHub CEO Chris Wanstrath and Pascal Finette, Director of Mozilla Labs, were among the speakers. Eshagh says that at the event, he learned that when Erlang was first released on GitHub, contributions increased by 500 percent. "We are hoping to tap into that energy," he adds.

"A lot of our projects were launched under SVN and continue to be operated under there," Eshagh says. Now open.NASA is looking at git-svn to bridge these source control systems.

"A lot of projects don't have change history or version control, so GitHub will help with source control and make it visible and available," Herron adds. Since they've posted the GitHub projects, Eshagh and his team have already seen some forks and contributions back, but he says the trick is to figure out how to get the project owners to engage and monitor the projects or to move to one system.

Which NASA projects would Eshagh like to see added to GitHub? "We have so many projects, we don't have favorites," he says. "If this is a viable solution, increases participation, and makes it easier for developers to develop, then we'd like to see them there." He adds that his team would like to see all of NASA's open source projects have version control, use best practices, and be handled in a way that the public can see them.

At the end of 2011, Nick Skytland, Program Manager of Open Government at the Johnson Space Center, posted the 2011 Annual Report by the NASA Open Government Initiative. His infographic says that there were 140,000,000 views of the NASA homepage; 17 Tweetups held with more than 1,600 participants; 2,371,250 combined followers on Twitter, Facebook, and Google+; and 50,000 followers on Google+ within the first 25 days.

"The scope and reach of our social media is not insignificant," Eshagh says. Herron points out that the nasa.gov site is the most visited US government site. He says that the community is very engaged. "People love to see our code," he adds. "People are excited about it." In fact, the open source team hopes to use their code to keep people excited about the space program.

NASA is now in an interesting phase, Eshagh explains. He says that after the space shuttle program ended last year, NASA Deputy Administrator Lori Garver was speaking to a group of students when one of them asked her whether she's now out of a job. (She's not.) The new code site helps illustrate how many projects are still active and growing at NASA. "We still have a lot of work to do and a lot of people are pulling for us," Eshagh says.


Weekend Project: Learning Ins and Outs of Arduino

Arduino is an open embedded hardware and software platform designed for rapid creativity. It's both a great introduction to embedded programming and a fast track to building all kinds of cool devices like animatronics, robots, fabulous blinky things, animated clothing, games, your own little fabs... you can build what you imagine. Follow along as we learn both embedded programming and basic electronics.

Arduino was invented by Massimo Banzi, a self-taught electronics guru who has been fascinated by electronics since childhood. Mr. Banzi had what I think of as a dream childhood: endless hours spent dissecting, studying, re-assembling things in creative ways, and testing to destruction. Mr. Banzi designed Arduino to be friendly and flexible to creative people who want to build things, rather than a rigid, overly-technical platform requiring engineering expertise.

The microprocessor revolution has removed a lot of barriers for newcomers, and considerably speeded up the pace of iteration. In the olden days, building electronic devices meant connecting wires and components, and even small changes were time-consuming hardware changes. Now a lot of electronics functions have moved to software, and changes are done in code.

Arduino is a genuinely interactive platform (not fake interactive like clicking dumb stuff on Web pages) that accepts different types of inputs, and supports all kinds of outputs: motion detector, touchpad, keyboard, audio signals, light, motors... if you can figure out how to connect it you can make it go. It's the ultimate low-cost "what-if" platform: What if I connect these things? What if I boost the power this high? What if I give it these instructions? Mr. Banzi calls it "the art of chance." Figure 1 shows an Arduino Uno; the Arduino boards contain a microprocessor and analog and digital inputs and outputs. There are several different Arduino boards.

Figure 1: Arduino Uno.

You'll find a lot of great documentation online at Arduino and Adafruit Industries, and Mr. Banzi's book Getting Started With Arduino is a must-have.

The world is over-full of useful garbage: circuit boards, speakers, motors, wiring, enclosures, video screens, you name it, our throwaway society is a do-it-yourselfer's paradise. With some basic skills and knowledge you can recycle and reuse all kinds of electronics components. Tons of devices get chucked into landfills because a five-cent part like a resistor or capacitor failed. As far as I'm concerned this is found money, and a great big wonderful playground. At the least having a box full of old stuff gives you a bunch of nothing-to-lose components for practice and experimentation.

The Arduino integrated development environment (IDE) is a beautiful creation. The Arduino programming language is based on the Processing language, which was designed for creative projects. It looks a lot like C and C++. The IDE compiles and uploads your code to your Arduino board; it is fast and you can make and test a lot of changes in a short time. An Arduino program is called a sketch. See Installing Arduino on Linux for installation instructions.

Figure 2: A sketch loaded into the Arduino IDE.

You will need to know how to solder. It's really not hard to learn how to do it the right way, and the Web is full of good video howtos. It just takes a little practice and decent tools. Get yourself a good variable-heat soldering iron and 60/40 rosin core lead solder, or 63/37. Don't use silver solder unless you know what you're doing, and lead-free solder is junk and won't work right. I use a Weller WLC100 40-Watt soldering station, and I love it. You're dealing with small, delicate components, not brazing plumbing joints, so having the right heat and a little finesse make all the difference.

Another good tool is a lighted magnifier. Don't be all proud and think your eyesight is too awesome for a little help; it's better to see what you're doing.

Adafruit Industries sells all kinds of Arduino gear, and has a lot of great tutorials. I recommend starting with these hardware bundles because they come with enough parts for several projects:

- Adafruit ARDX – v1.3 Experimentation Kit for Arduino. This has an Arduino board, solderless breadboard, wires, resistors, blinky LEDs, USB cable, a little motor, experimenter's guide, and a bunch more goodies. $85.00.
- 9-volt power supply. Seven bucks. You could use batteries, but batteries lose strength as they age, so you don't get a steady voltage.
- Tool kit that includes an adjustable-temperature soldering iron, digital multimeter, cutters and strippers, solder, vise, and a power supply. $100.

Other good accessories are an anti-static mat and a wrist grounding strap. These little electronics are pretty robust and don't seem bothered by static electricity, but it's cheap insurance in a high-static environment. Check out the Shields page for more neat stuff like the Wave audio shield for adding sound effects to an Arduino project, a touchscreen, a chip programmer, and LED matrix boards.

Let's talk about volts (V), current (I), and resistance (R), because there is much confusion about these. Voltage is measured in volts, current is measured in amps, and resistance is measured in ohms. Electricity is often compared to water because they behave similarly: voltage is like water pressure, current is like flow rate, and resistance is akin to pipe diameter. If you increase the voltage you also increase the current. A bigger pipe allows more current. If you decrease the pipe size you increase resistance.

Figure 3: Circuit boards are cram-full of resistors. You will be using lots of resistors.

Talk is cheap, so take a look at Figure 3. This is an old circuit board from a washing machine. See the stripey things? Those are resistors. All circuit boards have gobs of resistors, because these control how much current flows over each circuit. The power supply always pushes out more power than the individual circuits can handle, because it has to supply multiple circuits. So there are resistors on each circuit to throttle the current down to a level each circuit can safely handle.

Again, there is a good water analogy — out here in my little piece of the world we use irrigation ditches. The output from the ditch is too much for a single row of plants, because its purpose is to supply multiple rows of plants with water. So we have systems of dams and diverters to restrict and guide the flow.

In your electronic adventures you're going to be calculating resistor sizes for your circuits, using the formula R (resistance) = V (voltage) / I (current). This is known as Ohm's Law, named for physicist Georg Ohm who figured out all kinds of neat things and described them in math for us to use. There are nice online calculators, so don't worry about getting it right all by yourself.
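For example, suppose you want to drive an LED with a forward voltage of about 2V at 20mA from an Arduino's 5V pin (typical hobby-LED numbers, not the spec of any particular part). The resistor has to drop the remaining 3V, so R = 3V / 0.02A = 150 ohms; in practice you grab a nearby standard value such as 150 or 220 ohms, erring on the higher side to be gentle on the LED.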

That's all for now. In the next tutorial, we'll learn about loading and editing sketches, and making your Arduino board do stuff.
